Showing 16 to 30 of 418 results
Peer reviewed | PDF on ERIC
Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2019
We derive formulas for the differential item functioning (DIF) measures that two routinely used DIF statistics are designed to estimate. The DIF measures that match on observed scores are compared to DIF measures based on an unobserved ability (theta or true score) for items that are described by either the one-parameter logistic (1PL) or…
Descriptors: Scores, Test Bias, Statistical Analysis, Item Response Theory
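The matched-score DIF statistics this entry analyzes are typically of the Mantel-Haenszel family. As an illustrative sketch (an assumption about which routinely used statistics are meant, not the paper's own derivation), the Mantel-Haenszel common odds ratio across observed-score strata, reported on the ETS delta scale:

```python
import math

def mantel_haenszel_delta(strata):
    """Mantel-Haenszel common odds ratio across matched score strata,
    reported on the ETS delta scale: -2.35 * ln(alpha).

    Each stratum is a tuple (A, B, C, D):
      A = reference-group correct, B = reference-group incorrect,
      C = focal-group correct,     D = focal-group incorrect.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return -2.35 * math.log(num / den)

# Balanced strata (no group difference): alpha = 1, so delta is ~0.
strata = [(30, 10, 30, 10), (20, 20, 20, 20)]
print(abs(mantel_haenszel_delta(strata)) < 1e-9)  # True
```

By the ETS convention, negative delta values flag items that are relatively harder for the focal group at matched observed scores.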
Peer reviewed | PDF on ERIC
Selvi, Hüseyin – Higher Education Studies, 2020
This study aimed to examine the effect of using items from previous exams on students' pass-fail rates and on the psychometric properties of the tests and items. The study included data from 115 tests and 11,500 items used in the midterm and final exams of 3,910 students in the preclinical term at the Faculty of Medicine from 2014 to 2019. Data…
Descriptors: Answer Keys, Tests, Test Items, True Scores
Peer reviewed | Direct link
Schumacker, Randall – Measurement: Interdisciplinary Research and Perspectives, 2019
The R software provides packages and functions for data analysis in classical true score theory, generalizability theory, item response theory, and Rasch measurement. A brief list of notable articles in each measurement theory and of the first measurement journals is followed by a list of R psychometric software packages. Each psychometric…
Descriptors: Psychometrics, Computer Software, Measurement, Item Response Theory
Peer reviewed | Direct link
Zhang, Zhonghua – Applied Measurement in Education, 2020
The characteristic curve methods have been applied to estimate the equating coefficients in test equating under the graded response model (GRM). However, the approaches for obtaining the standard errors for the estimates of these coefficients have not been developed and examined. In this study, the delta method was applied to derive the…
Descriptors: Error of Measurement, Computation, Equated Scores, True Scores
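The delta method mentioned in this entry propagates the sampling variance of an estimate through a differentiable transformation: SE(g(x̂)) ≈ |g′(x̂)| · SE(x̂). A generic numerical sketch (illustrative only; the paper derives the GRM-specific case analytically):

```python
import math

def delta_method_se(g, x_hat, var_x, h=1e-6):
    """First-order delta-method standard error of g(x_hat) for a
    scalar estimate x_hat with sampling variance var_x.
    The derivative g'(x_hat) is taken by central difference."""
    dg = (g(x_hat + h) - g(x_hat - h)) / (2 * h)
    return abs(dg) * var_x ** 0.5

# SE of log(x_hat) when x_hat = 2.0 and Var(x_hat) = 0.04:
# |1/2| * sqrt(0.04) = 0.1
print(round(delta_method_se(math.log, 2.0, 0.04), 4))  # 0.1
```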
Peer reviewed | PDF on ERIC
Wang, Tianqi; Jing, Xia; Li, Qi; Gao, Jing; Tang, Jie – International Educational Data Mining Society, 2019
Massive Open Online Courses (MOOCs) have become increasingly popular and have attracted large numbers of students worldwide. A popular course may enroll thousands of students, which makes it infeasible for the instructors to grade all the submissions. Peer assessment is thus an…
Descriptors: Peer Evaluation, Accuracy, Grades (Scholastic), Grading
Peer reviewed | PDF on ERIC
Han, Yong; Wu, Wenjun; Ji, Suozhao; Zhang, Lijun; Zhang, Hui – International Educational Data Mining Society, 2019
Peer grading is commonly adopted by instructors as an effective assessment method for MOOCs (Massive Open Online Courses) and SPOCs (Small Private Online Courses). To address the problems caused by the varied skill levels and attitudes of online students, statistical models have been proposed to improve the fairness and accuracy of peer grading.…
Descriptors: Peer Evaluation, Grading, Online Courses, Computer Assisted Testing
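As a toy illustration of this family of statistical models (simple precision-weighted averaging, not the specific model proposed in the paper), each submission's grade can be estimated as a weighted mean of peer scores, with each rater weighted by the inverse variance of that rater's residuals:

```python
def estimate_grades(scores, n_iter=10):
    """scores: {submission: {rater: score}}.
    Alternates between (1) grade = precision-weighted mean of peer
    scores and (2) rater precision = 1 / mean squared residual."""
    raters = {r for by_rater in scores.values() for r in by_rater}
    weight = dict.fromkeys(raters, 1.0)
    grades = {}
    for _ in range(n_iter):
        for s, by_rater in scores.items():
            total_w = sum(weight[r] for r in by_rater)
            grades[s] = sum(weight[r] * v
                            for r, v in by_rater.items()) / total_w
        for r in raters:
            sq = [(v - grades[s]) ** 2
                  for s, by_rater in scores.items()
                  for rr, v in by_rater.items() if rr == r]
            # Floor the variance to avoid division by zero.
            weight[r] = 1.0 / max(sum(sq) / len(sq), 1e-6)
    return grades

# Raters a and b agree; rater c is biased upward by 2 points.
scores = {"s1": {"a": 8, "b": 8, "c": 10},
          "s2": {"a": 6, "b": 6, "c": 8}}
g = estimate_grades(scores)
print(round(g["s1"], 2), round(g["s2"], 2))  # 8.0 6.0
```

The iteration quickly downweights the discrepant rater, pulling the estimates toward the consensus scores.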
Peer reviewed | Direct link
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019
Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note highlights their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person…
Descriptors: True Scores, Item Response Theory, Test Items, Test Theory
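One of the IRT/classical-test-theory links relevant to this entry is that a person's classical true score on a set of dichotomous items equals the sum of the item response probabilities at that person's ability. A minimal sketch under the Rasch (1PL) model (the note itself also covers interval estimation, which is omitted here):

```python
import math

def rasch_prob(theta, b):
    """P(correct) for a Rasch (1PL) item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def true_score(theta, difficulties):
    """Model-implied true score: the expected number-correct score,
    i.e. the sum of item probabilities at ability theta."""
    return sum(rasch_prob(theta, b) for b in difficulties)

# Three items with difficulties symmetric around theta = 0:
# the expected score is exactly half the test length.
print(round(true_score(0.0, [-1.0, 0.0, 1.0]), 3))  # 1.5
```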
Peer reviewed | Direct link
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
This study presents new models for item response functions (IRFs) in the framework of the D-scoring method (DSM), which is gaining attention in the field of educational and psychological measurement and large-scale assessments. In a previous work on DSM, the IRFs of binary items were estimated using a logistic regression model (LRM). However, the LRM…
Descriptors: Item Response Theory, Scoring, True Scores, Scaling
Peer reviewed | Direct link
Diao, Hongyu; Keller, Lisa – Applied Measurement in Education, 2020
Examinees who attempt the same test multiple times are often referred to as "repeaters." Previous studies suggested that repeaters should be excluded from the total sample before equating because repeater groups are distinguishable from non-repeater groups. In addition, repeaters might memorize anchor items, causing item drift under a…
Descriptors: Licensing Examinations (Professions), College Entrance Examinations, Repetition, Testing Problems
Peer reviewed | Direct link
Raykov, Tenko; Marcoulides, George A.; Patelis, Thanos – Educational and Psychological Measurement, 2015
A critical discussion of the assumption of uncorrelated errors in classical psychometric theory and its applications is provided. It is pointed out that this assumption is essential for a number of fundamental results and underlies the concept of parallel tests, the Spearman-Brown prophecy formula, and the correction for attenuation formula, as well as…
Descriptors: Psychometrics, Correlation, Validity, Reliability
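Two of the classical results named in this abstract are easy to state concretely. A short sketch of the Spearman-Brown prophecy formula and the correction for attenuation, in their standard textbook forms (both of which rest on the uncorrelated-errors assumption the paper scrutinizes):

```python
def spearman_brown(rho, k):
    """Reliability of a test lengthened by factor k, given current
    reliability rho: k*rho / (1 + (k - 1)*rho)."""
    return k * rho / (1 + (k - 1) * rho)

def disattenuate(r_xy, rel_x, rel_y):
    """Correction for attenuation: the true-score correlation implied
    by an observed correlation and the two tests' reliabilities."""
    return r_xy / (rel_x * rel_y) ** 0.5

print(round(spearman_brown(0.6, 2), 3))        # doubling: 0.75
print(round(disattenuate(0.42, 0.7, 0.9), 3))  # 0.529
```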
Peer reviewed | Direct link
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written in response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation to each essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing
Peer reviewed | PDF on ERIC
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
Peer reviewed | PDF on ERIC
Phillips, Gary W.; Jiang, Tao – Practical Assessment, Research & Evaluation, 2016
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis, the researcher has no way of knowing whether the sample size is large enough to detect the effect of interest. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
Descriptors: Error of Measurement, Statistical Analysis, Equated Scores, Sample Size
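One way measurement error enters a power analysis is through attenuation: unreliable scores shrink the true standardized effect by √reliability, reducing power at a fixed sample size. A sketch using a two-sided, two-sample z-test approximation (an illustration of the general point, not the paper's procedure, which also treats equating error):

```python
from statistics import NormalDist

def power_two_sample(d_true, reliability, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z test when
    measurement error attenuates the true standardized effect
    d_true to d_true * sqrt(reliability)."""
    nd = NormalDist()
    d_obs = d_true * reliability ** 0.5      # attenuated effect size
    z_crit = nd.inv_cdf(1 - alpha / 2)       # two-sided critical value
    ncp = d_obs * (n_per_group / 2) ** 0.5   # noncentrality parameter
    return 1 - nd.cdf(z_crit - ncp)

# d = 0.5 with n = 64 per group: power drops as reliability falls.
print(round(power_two_sample(0.5, 1.0, 64), 2))  # 0.81
print(round(power_two_sample(0.5, 0.7, 64), 2))  # 0.66
```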
Peer reviewed | Direct link
Tao, Wei; Cao, Yi – Applied Measurement in Education, 2016
Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…
Descriptors: Item Response Theory, Equated Scores, Test Format, Models
Peer reviewed | Direct link
Cher Wong, Cheow – Journal of Educational Measurement, 2015
Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
Descriptors: Item Response Theory, Error of Measurement, True Scores, Equated Scores
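The true score equating referred to here (Lord's method for dichotomous items) maps a number-correct score on form X to the ability at which form X's test characteristic curve equals that score, then evaluates form Y's curve at that ability. A minimal sketch for 2PL items using bisection (an illustration; the article's contribution, the asymptotic standard errors for polytomous items, is not derived here):

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def tcc(theta, items):
    """Test characteristic curve: expected number-correct score."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def true_score_equate(x, form_x, form_y, lo=-8.0, hi=8.0):
    """Find theta with tcc(theta, form_x) = x by bisection (the TCC
    is increasing in theta), then return the equated score on form Y.
    x must lie strictly between 0 and the number of form X items."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if tcc(mid, form_x) < x:
            lo = mid
        else:
            hi = mid
    return tcc((lo + hi) / 2.0, form_y)

form_x = [(1.0, -0.5), (1.0, 0.5)]
# Equating a form to itself must return the input score.
print(round(true_score_equate(1.2, form_x, form_x), 3))  # 1.2
```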