Showing 1 to 15 of 47 results
Peer reviewed
Ford, Jeremy W.; Conoyer, Sarah J.; Lembke, Erica S.; Smith, R. Alex; Hosp, John L. – Assessment for Effective Intervention, 2018
In the present study, two types of curriculum-based measurement (CBM) tools in science, Vocabulary Matching (VM) and Statement Verification for Science (SV-S), a modified Sentence Verification Technique, were compared. Specifically, this study aimed to determine whether the format of information presented (i.e., SV-S vs. VM) produces differences…
Descriptors: Curriculum Based Assessment, Evaluation Methods, Measurement Techniques, Comparative Analysis
Peer reviewed
Allen, Abigail A.; Poch, Apryl L.; Lembke, Erica S. – Learning Disability Quarterly, 2018
This manuscript describes two empirical studies of alternative scoring procedures used with curriculum-based measurement in writing (CBM-W). Study 1 explored the technical adequacy of a trait-based rubric in first grade. Study 2 explored the technical adequacy of a trait-based rubric, production-dependent, and production-independent scores in…
Descriptors: Scoring, Alternative Assessment, Curriculum Based Assessment, Emergent Literacy
Peer reviewed
LaBelle, Sara; Johnson, Zac D. – Communication Education, 2018
Three studies were conducted to generate a valid and reliable instrument to measure student-to-student confirmation. Study One (N = 396) sought to establish a factor structure based on previous research. Study Two (N = 396) sought to confirm this factor structure and assess criterion-related validity. Study Three (N = 283) sought to assess…
Descriptors: College Students, Peer Relationship, Interpersonal Communication, Measures (Individuals)
Peer reviewed
PDF on ERIC
Guffey, Sarah Katie; Slater, Timothy F.; Slater, Stephanie J. – Journal of Astronomy & Earth Sciences Education, 2017
Geoscience education researchers have considerable need for criterion-referenced, easy-to-administer, easy-to-score, conceptual surveys for undergraduates taking introductory science survey courses in order for faculty to monitor the learning impacts of innovative teaching. In response, this study establishes the reliability and validity of a…
Descriptors: Geology, Scientific Concepts, Science Tests, Undergraduate Students
Peer reviewed
Cunningham, James W.; Mesmer, Heidi Anne – Elementary School Journal, 2014
Common Core Reading Standard 10 not only prescribes the difficulty of texts students should become able to read, but also the difficulty diet of texts schools should ask their students to read across the school year. The use of quantitative text-assessment tools in the implementation of this standard warrants an examination into the validity of…
Descriptors: Difficulty Level, Academic Standards, State Standards, Statistical Analysis
Marshall, J. Laird; Haertel, Edward H. – 1975
For classical, norm-referenced test reliability, Cronbach's alpha has been shown to be equal to the mean of all possible split-half Pearson product-moment correlation coefficients, adjusted by the Spearman-Brown prophecy formula. For criterion-referenced test reliability, in an analogous vein, this paper provides the rationale behind, the analysis…
Descriptors: Criterion Referenced Tests, Statistical Analysis, Test Reliability
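The classical identity that Marshall and Haertel take as their starting point can be checked numerically. The sketch below is an illustration on simulated data, not their analysis: it uses the Flanagan-Rulon form of the split-half coefficient, for which the mean over all equal splits equals Cronbach's alpha exactly (the Spearman-Brown-adjusted Pearson version coincides when the two halves have equal variances).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores: 200 examinees x 4 items sharing a common factor.
factor = rng.normal(size=(200, 1))
items = factor + rng.normal(size=(200, 4))

k = items.shape[1]
total = items.sum(axis=1)
var_total = total.var(ddof=1)

# Cronbach's alpha from the item and total-score variances.
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / var_total)

# Flanagan-Rulon split-half coefficient for every equal split;
# each split appears twice (half and complement), which leaves the mean unchanged.
coefs = []
for half in itertools.combinations(range(k), k // 2):
    other = [i for i in range(k) if i not in half]
    a = items[:, list(half)].sum(axis=1)
    b = items[:, other].sum(axis=1)
    coefs.append(2 * (1 - (a.var(ddof=1) + b.var(ddof=1)) / var_total))

print(round(alpha, 6), round(float(np.mean(coefs)), 6))  # identical values
```

The equality is algebraic, so the two printed values agree to floating-point precision regardless of the simulated data.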
Peer reviewed Peer reviewed
Swaminathan, Hariharan; And Others – Journal of Educational Measurement, 1974
It is proposed that the reliability of criterion-referenced test scores be defined in terms of the consistency of the decision-making process across repeated administrations of the test. (Author/RC)
Descriptors: Criterion Referenced Tests, Decision Making, Statistical Analysis, Test Reliability
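The definition proposed above — reliability as consistency of mastery decisions across repeated administrations — can be made concrete with a small sketch. The data below are hypothetical, not from the study; the chance-corrected kappa shown alongside the raw agreement is the index taken up by several of the later papers in this list.

```python
import numpy as np

# Hypothetical mastery classifications (1 = mastery) for the same ten
# examinees on two administrations of a criterion-referenced test.
first  = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
second = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])

p_o = float(np.mean(first == second))   # observed decision consistency
p1, p2 = first.mean(), second.mean()
p_c = p1 * p2 + (1 - p1) * (1 - p2)     # agreement expected by chance
kappa = (p_o - p_c) / (1 - p_c)         # chance-corrected consistency

print(p_o, round(kappa, 4))  # 0.8 0.5833
```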
Huynh, Huynh – 1977
The kappamax reliability index of domain-referenced tests is defined as the upper bound of kappa when all possible cutoff scores are considered. Computational procedures for kappamax are described, as well as its approximation for long tests, based on Kuder-Richardson formula 21. The sampling error of kappamax, and the effects of test length and…
Descriptors: Criterion Referenced Tests, Mathematical Models, Statistical Analysis, Test Reliability
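The Kuder-Richardson formula 21 on which Huynh's long-test approximation rests needs only the number of items and the mean and variance of total scores. The sketch below shows just that KR-21 computation on simulated 0/1 responses; it is not Huynh's kappamax derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated 0/1 item responses: 50 examinees x 20 items, correlated
# through a common ability factor.
ability = rng.normal(size=(50, 1))
items = (rng.normal(size=(50, 20)) < ability).astype(int)

k = items.shape[1]
scores = items.sum(axis=1)
m, s2 = scores.mean(), scores.var(ddof=1)

# Kuder-Richardson formula 21 (treats all items as equally difficult,
# so it is a lower bound on KR-20 for the same data).
kr21 = k / (k - 1) * (1 - m * (k - m) / (k * s2))
print(round(kr21, 3))
```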
Subkoviak, Michael J. – 1976
A number of different definitions and indices of reliability for mastery tests have recently been proposed in an attempt to cope with possible lack of score variability that attenuates traditional coefficients. One promising index that has been suggested is the proportion of students in a group that are consistently assigned to the same mastery…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Scores
Berk, Ronald A. – 1980
Seventeen statistics for measuring the reliability of criterion-referenced tests were critically reviewed. The review was organized into two sections: (1) a discussion of preliminary considerations to provide a foundation for choosing the appropriate category of "reliability" (threshold loss function, squared-error loss-function, or…
Descriptors: Criterion Referenced Tests, Cutting Scores, Scoring Formulas, Statistical Analysis
Moyer, Judith E.; Fishbein, Ronald L. – 1977
The problem that this research addressed was one of decision making. Given three sets of criterion-referenced tests which were designed to be parallel in content, would a traditional reliability coefficient produce different decisions about the reliability of those tests than would kappa? The procedure collected statewide results on 136 test…
Descriptors: Analysis of Variance, Comparative Analysis, Criterion Referenced Tests, Measurement Techniques
Willoughby, Lee; And Others – 1976
This study compared a domain-referenced approach with a traditional psychometric approach in the construction of a test. Results of the December 1975 Quarterly Profile Exam (QPE) administered to 400 examinees at a university were the source of data. The 400-item QPE is a five-alternative multiple-choice test of information a "safe"…
Descriptors: Comparative Analysis, Criterion Referenced Tests, Norm Referenced Tests, Statistical Analysis
Downing, Steven M.; Mehrens, William A. – 1978
Four criterion-referenced reliability coefficients were compared to the Kuder-Richardson estimates and to each other. The Kuder-Richardson formulas 20 and 21, the Livingston, the Subkoviak, and two Huynh coefficients were computed for a random sample of 33 criterion-referenced tests. The Subkoviak coefficient yielded the highest mean value;…
Descriptors: Career Development, Comparative Analysis, Criterion Referenced Tests, Factor Analysis
Hsu, Tse-Chi – 1971
A good criterion-referenced test item is defined as one that allows an individual to answer correctly if he has mastered the criterion behavior represented by the item and to answer incorrectly if he has not. Therefore, a good discriminating item for criterion-referenced tests is one which has a larger proportion of correct…
Descriptors: Behavioral Objectives, Correlation, Criterion Referenced Tests, Individualized Instruction
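Hsu's notion of discrimination compares how an item behaves for examinees who have and have not mastered the objective. A minimal sketch, on hypothetical responses (not data from the paper), is the difference in proportion-correct between the two groups:

```python
import numpy as np

# Hypothetical responses to one item (1 = correct) from examinees
# classified as masters vs. non-masters of the objective.
masters    = np.array([1, 1, 1, 0, 1, 1, 1, 1])
nonmasters = np.array([0, 1, 0, 0, 1, 0, 0, 1])

# Criterion-referenced discrimination index: proportion correct among
# masters minus proportion correct among non-masters.
d = masters.mean() - nonmasters.mean()
print(d)  # 7/8 - 3/8 = 0.5
```

An item with d near zero gives the same answer pattern in both groups and so carries no information about mastery status.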
Edmonston, Leon P.; Randall, Robert S. – 1972
A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…
Descriptors: Criterion Referenced Tests, Decision Making, Evaluation Methods, Item Analysis