Gleason, Jim; Thomas, Matt; Bagley, Spencer; Rice, Lisa; White, Diana; Clements, Nathan – North American Chapter of the International Group for the Psychology of Mathematics Education, 2015
We present findings from an analysis of the Calculus Concept Inventory. Analysis of data from more than 1,500 students across four institutions indicates deficiencies in the instrument: the data are consistent with a unidimensional model, and the scores do not have strong enough reliability for the instrument's intended use. This finding…
Descriptors: Calculus, Content Validity, Mathematical Concepts, Test Reliability
Powers, Donald E.; Escoffery, David S.; Duchnowski, Matthew P. – Applied Measurement in Education, 2015
By far the most frequently used method of validating (the interpretation and use of) automated essay scores has been to compare them with scores awarded by human raters. Although this practice is questionable, human-machine agreement is still often regarded as the "gold standard." Our objective was to refine this model and apply it to…
Descriptors: Essays, Test Scoring Machines, Program Validation, Criterion Referenced Tests