Showing 1 to 15 of 19 results
Peer reviewed
PDF on ERIC
Al-zboon, Habis Saad; Alrekebat, Amjad Farhan – International Journal of Higher Education, 2021
This study aims to identify the effect of the difficulty level of multiple-choice test items on the reliability coefficient and the standard error of measurement based on item response theory (IRT). To achieve the objectives of the study, the WinGen3 software was used to generate the IRT parameters (difficulty, discrimination, guessing) for four…
Descriptors: Multiple Choice Tests, Test Items, Difficulty Level, Error of Measurement
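As background for entries like this one, here is a minimal sketch of how item difficulty enters the IRT standard error of measurement. This is not the study's procedure (WinGen3 only generates the parameters); it assumes a 3PL model and NumPy, and all parameter values are illustrative:

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def test_information(theta, a, b, c):
    """Sum of 3PL item information functions at ability theta."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    # 3PL item information: a^2 * (q/p) * ((p - c) / (1 - c))^2
    return np.sum(a**2 * (q / p) * ((p - c) / (1 - c))**2)

rng = np.random.default_rng(0)
n_items = 40
a = rng.lognormal(0, 0.3, n_items)   # discrimination parameters
b = rng.normal(0, 1, n_items)        # difficulty parameters
c = np.full(n_items, 0.2)            # guessing parameters

for theta in (-2, 0, 2):
    info = test_information(theta, a, b, c)
    sem = 1 / np.sqrt(info)          # conditional SEM = 1 / sqrt(information)
    print(f"theta={theta:+d}  I={info:6.2f}  SEM={sem:.3f}")
```

Shifting the difficulty distribution away from the ability region of interest lowers the test information there and inflates the conditional SEM, which is the relationship the study examines.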
Peer reviewed
PDF on ERIC
Koçak, Duygu – International Journal of Progressive Education, 2020
The aim of this study was to determine the effect of chance success on test equating. For this purpose, artificially generated data sets with sample sizes of 500 and 1,000 were equated using linear equating and equipercentile equating methods. In the simulated data, a total of four cases were created with no…
Descriptors: Test Theory, Equated Scores, Error of Measurement, Sample Size
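For readers unfamiliar with the two equating methods named above, a minimal illustration on simulated scores (not the study's actual simulation design; NumPy assumed, all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(40, 0.55, 1000)   # simulated scores on form X
y = rng.binomial(40, 0.60, 1000)   # simulated scores on form Y

# Linear equating: match the means and standard deviations of the forms.
def linear_equate(score):
    return y.mean() + y.std() / x.std() * (score - x.mean())

# Equipercentile equating: match percentile ranks across the forms.
def equipercentile_equate(score):
    pct = (x <= score).mean() * 100    # percentile rank of score on form X
    return np.percentile(y, pct)       # form Y score at the same rank

for s in (15, 22, 30):
    print(s, round(linear_equate(s), 2), round(equipercentile_equate(s), 2))
```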
Peer reviewed
PDF on ERIC
Istiyono, Edi; Dwandaru, Wipsar Sunu Brams; Lede, Yulita Adelfin; Rahayu, Farida; Nadapdap, Amipa – International Journal of Instruction, 2019
The objective of this study was to develop a Physics critical thinking skills test using computerized adaptive testing (CAT) based on item response theory (IRT). This research was development research using the 4-D model (define, design, develop, and disseminate). The content validity of the items was established using Aiken's V. The test trial involved 252 students…
Descriptors: Critical Thinking, Thinking Skills, Cognitive Tests, Physics
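The core of an IRT-based CAT is choosing the next item by maximum information at the examinee's current ability estimate. A minimal sketch of that selection step, assuming a 2PL model and an artificial item bank (not the instrument developed in this study):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """2PL Fisher information for each item at ability theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

rng = np.random.default_rng(2)
a = rng.lognormal(0, 0.3, 100)   # item bank discriminations (illustrative)
b = rng.normal(0, 1, 100)        # item bank difficulties (illustrative)

theta_hat = 0.0                  # current ability estimate
administered = {3, 17}           # items already given

# Select the unused item with maximum information at theta_hat.
info = item_information(theta_hat, a, b)
info[list(administered)] = -np.inf
next_item = int(np.argmax(info))
print("next item:", next_item)
```

After each response, the ability estimate is updated and the selection step repeats until a stopping rule (for example, a target SEM) is met.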
Schoen, Robert C.; Yang, Xiaotong; Tazaz, Amanda M.; Bray, Wendy S.; Farina, Kristy – Grantee Submission, 2019
The "2016 Knowledge for Teaching Early Elementary Mathematics" (2016 K-TEEM) test measures teachers' mathematical knowledge for teaching early elementary mathematics. The 2016 K-TEEM is the third version of the K-TEEM (Schoen, Bray, Wolfe, Tazaz, & Nielsen, 2017). In this report, we present results of the first large-scale field test…
Descriptors: Test Construction, Elementary School Mathematics, Elementary School Teachers, Knowledge Base for Teaching
Schoen, Robert C.; Yang, Xiaotong; Paek, Insu – Grantee Submission, 2018
This report provides evidence of the substantive and structural validity of the Knowledge for Teaching Elementary Fractions Test. Field-test data were gathered with a sample of 241 elementary educators, including teachers, administrators, and instructional support personnel, in spring 2017, as part of a larger study involving a multisite…
Descriptors: Psychometrics, Pedagogical Content Knowledge, Mathematics Tests, Mathematics Instruction
Schoen, Robert C.; Yang, Xiaotong; Liu, Sicong; Paek, Insu – Grantee Submission, 2017
The Early Fractions Test v2.2 is a paper-pencil test designed to measure mathematics achievement of third- and fourth-grade students in the domain of fractions. The purpose, or intended use, of the Early Fractions Test v2.2 is to serve as a measure of student outcomes in a randomized trial designed to estimate the effect of an educational…
Descriptors: Psychometrics, Mathematics Tests, Mathematics Achievement, Fractions
Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Reading Comprehension, Testing Programs, Statistical Analysis, Grade 7
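A minimal sketch of the odd-even split-half reliability computation (with Spearman-Brown correction) that reliability reports like this one typically use; the data here are simulated, not easyCBM responses:

```python
import numpy as np

def split_half_reliability(responses):
    """Odd-even split-half reliability with Spearman-Brown correction.

    responses: persons x items matrix of 0/1 item scores.
    """
    odd = responses[:, 0::2].sum(axis=1)    # score on odd-numbered items
    even = responses[:, 1::2].sum(axis=1)   # score on even-numbered items
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)        # Spearman-Brown step-up

rng = np.random.default_rng(3)
ability = rng.normal(0, 1, (200, 1))
difficulty = rng.normal(0, 1, (1, 20))
resp = (rng.random((200, 20)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)
print(round(split_half_reliability(resp), 3))
```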
Peer reviewed
Direct link
Bristow, M.; Erkorkmaz, K.; Huissoon, J. P.; Jeon, Soo; Owen, W. S.; Waslander, S. L.; Stubley, G. D. – IEEE Transactions on Education, 2012
Any meaningful initiative to improve the teaching and learning in introductory control systems courses needs a clear test of student conceptual understanding to determine the effectiveness of proposed methods and activities. The authors propose a control systems concept inventory. Development of the inventory was collaborative and iterative. The…
Descriptors: Diagnostic Tests, Concept Formation, Undergraduate Students, Engineering Education
Peer reviewed
Shoemaker, David M. – Educational and Psychological Measurement, 1972
Descriptors: Difficulty Level, Error of Measurement, Item Sampling, Simulation
Peer reviewed
Diamond, James J.; McCormick, Janet – Evaluation and the Health Professions, 1986
Item responses from an in-training examination in diagnostic radiology are used to illustrate the application of a strength-of-association statistic to the general problem of item analysis. Criteria for item selection, general issues of reliability, and error of measurement are discussed. (Author/LMO)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Graduate Medical Education
Peer reviewed
Feldt, Leonard S. – Educational and Psychological Measurement, 1984
The binomial error model includes form-to-form difficulty differences as error variance and leads to Kuder-Richardson formula 21 (KR-21) as an estimate of reliability. If the form-to-form component is removed from the estimate of error variance, the binomial model leads to KR-20 as the reliability estimate. (Author/BW)
Descriptors: Achievement Tests, Difficulty Level, Error of Measurement, Mathematical Formulas
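The two formulas contrasted in this abstract are standard. A brief sketch computing both from the same 0/1 response matrix; when item difficulties vary, KR-21 counts that variation as error and therefore falls at or below KR-20 (simulated data, NumPy assumed):

```python
import numpy as np

def kr20(x):
    """Kuder-Richardson formula 20 for a persons x items 0/1 matrix."""
    k = x.shape[1]
    p = x.mean(axis=0)                      # item difficulties
    total_var = x.sum(axis=1).var(ddof=1)   # total-score variance
    return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

def kr21(x):
    """KR-21: KR-20 with all items assumed equally difficult."""
    k = x.shape[1]
    m = x.sum(axis=1).mean()                # mean total score
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - m * (k - m) / (k * total_var))

rng = np.random.default_rng(4)
ability = rng.normal(0, 1, (300, 1))
difficulty = rng.normal(0, 1, (1, 25))
x = (rng.random((300, 25)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(int)
print("KR-20:", round(kr20(x), 3), " KR-21:", round(kr21(x), 3))  # KR-21 <= KR-20
```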
Divgi, D. R. – 1978
One aim of criterion-referenced testing is to classify an examinee without reference to a norm group; therefore, any statements about the dependability of such classification ought to be group-independent also. A population-independent index is proposed in terms of the probability of incorrect classification near the cutoff true score. The…
Descriptors: Criterion Referenced Tests, Cutting Scores, Difficulty Level, Error of Measurement
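Divgi's actual index is not reproduced in the truncated abstract, but the underlying quantity, the probability of incorrect classification for a true score near the cutoff, can be illustrated under a binomial error model (SciPy assumed; the cutoff and test length here are arbitrary):

```python
from scipy.stats import binom

n_items = 30
cutoff = 21   # classify as "master" if observed score X >= cutoff

def p_misclassify(true_p):
    """Probability of an incorrect mastery classification under a
    binomial error model, given true proportion-correct true_p."""
    if true_p >= cutoff / n_items:                       # true master
        return binom.cdf(cutoff - 1, n_items, true_p)    # P(X < cutoff)
    return binom.sf(cutoff - 1, n_items, true_p)         # P(X >= cutoff)

for p in (0.60, 0.68, 0.70, 0.72, 0.80):
    print(f"true p = {p:.2f}  P(misclassify) = {p_misclassify(p):.3f}")
```

Misclassification probability peaks for true scores just at the cutoff and falls off on either side, which is why the index focuses on examinees near the cut.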
Peer reviewed
Straton, Ralph G.; Catts, Ralph M. – Educational and Psychological Measurement, 1980
Multiple-choice tests composed entirely of two-, three-, or four-choice items were investigated. Results indicated that the number of alternatives per item was inversely related to item difficulty, but directly related to item discrimination. The reliability and standard error of measurement of three-choice item tests were equivalent or superior…
Descriptors: Difficulty Level, Error of Measurement, Foreign Countries, Higher Education
Saunders, Joseph C.; Huynh, Huynh – 1980
In most reliability studies, the precision of a reliability estimate varies inversely with the number of examinees (sample size). Thus, to achieve a given level of accuracy, some minimum sample size is required. An approximation for this minimum size may be made if some reasonable assumptions regarding the mean and standard deviation of the test…
Descriptors: Cutting Scores, Difficulty Level, Error of Measurement, Mastery Tests
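The report's approximation formula is not given in the truncated abstract; a brute-force simulation illustrates the underlying relationship between sample size and the precision of a reliability estimate (KR-21 here, with all settings illustrative):

```python
import numpy as np

def kr21(x):
    """Kuder-Richardson formula 21 for a persons x items 0/1 matrix."""
    k = x.shape[1]
    m = x.sum(axis=1).mean()
    v = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - m * (k - m) / (k * v))

rng = np.random.default_rng(5)

def sd_of_kr21(n_examinees, n_items=30, reps=500):
    """Empirical spread of the KR-21 estimate across simulated samples."""
    estimates = []
    for _ in range(reps):
        ability = rng.normal(0, 1, (n_examinees, 1))
        diff = rng.normal(0, 1, (1, n_items))
        x = (rng.random((n_examinees, n_items))
             < 1 / (1 + np.exp(-(ability - diff)))).astype(int)
        estimates.append(kr21(x))
    return np.std(estimates)

for n in (50, 100, 200, 400):
    print(n, round(sd_of_kr21(n), 4))   # precision improves roughly as 1/sqrt(n)
```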
Peer reviewed
Direct link
Hintze, John M.; Christ, Theodore J. – School Psychology Review, 2004
This study examined the effects of controlling the level of difficulty on the sensitivity of repeated curriculum-based measurement (CBM). Participants included 99 students in Grades 2 through 5 who were administered CBM reading passage probes twice weekly over an 11-week period. Two sets of CBM reading progress monitoring materials were compared:…
Descriptors: Error of Measurement, Curriculum Based Assessment, Elementary Education, Elementary School Students