Showing 1 to 15 of 47 results
Peer reviewed
Direct link
Feinberg, Richard A.; von Davier, Matthias – Journal of Educational and Behavioral Statistics, 2020
The literature showing that subscores fail to add value is vast; yet despite their typical redundancy and the frequent presence of substantial statistical errors, many stakeholders remain convinced of their necessity. This article describes a method for identifying and reporting unexpectedly high or low subscores by comparing each examinee's…
Descriptors: Scores, Probability, Statistical Distributions, Ability
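A minimal sketch of the flagging idea, assuming a simple binomial model for a subscore given overall proportion correct; the published procedure is more elaborate, and `flag_subscore` and its parameters are illustrative names:

```python
import numpy as np
from scipy.stats import binom

def flag_subscore(subscore, n_items, p_overall, alpha=0.05):
    """Flag a subscore that falls outside a (1 - alpha) prediction
    interval under a binomial model whose success probability is the
    examinee's overall proportion correct. Illustrative only."""
    lo = binom.ppf(alpha / 2, n_items, p_overall)
    hi = binom.ppf(1 - alpha / 2, n_items, p_overall)
    if subscore < lo:
        return "unexpectedly low"
    if subscore > hi:
        return "unexpectedly high"
    return "consistent with overall performance"

# An examinee at 70% correct overall who gets only 3 of 15 items
# in one content area is flagged:
print(flag_subscore(subscore=3, n_items=15, p_overall=0.70))
```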
Peer reviewed
Direct link
Tampieri, Alessandro – Education Economics, 2016
This paper argues that assortative matching may explain over-education. Education determines individuals' income and, due to the presence of assortative matching, the quality of partners in personal, social and working life. Thus, an individual acquires education to improve the expected partners' quality. However, since every individual of the…
Descriptors: Educational Attainment, Overachievement, Education Work Relationship, Interpersonal Relationship
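A toy numeric illustration of the mechanism (my construction, not the paper's model): when expected partner quality rises with one's own education, the privately optimal schooling level exceeds the level that income alone would justify.

```python
import numpy as np

# Income is concave in schooling e, schooling costs are convex, and
# under assortative matching expected partner quality also rises with
# e. The chosen e then exceeds the income-only optimum.
e = np.linspace(0, 10, 1001)
income = 2.0 * np.sqrt(e)
cost = 0.15 * e**2
partner = 0.8 * np.sqrt(e)

print(e[np.argmax(income - cost)])            # income-only optimum (~2.2)
print(e[np.argmax(income + partner - cost)])  # with matching (~2.8)
```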
Peer reviewed
Direct link
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim – Assessment & Evaluation in Higher Education, 2015
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
Descriptors: Probability, Methods, Standard Setting (Scoring), Scores
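A sketch of the kind of simulation check described, assuming a Rasch response model; the cut-score rule below is a stand-in, not the published OBM formula:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate 2500 examinees with known ability on a Rasch-type test,
# set a cut score from the borderline group, and check classification
# accuracy against the known truth.
n_persons, n_items = 2500, 40
theta = rng.normal(0, 1, n_persons)            # true abilities
b = rng.normal(0, 1, n_items)                  # item difficulties
p = 1 / (1 + np.exp(-(theta[:, None] - b)))    # Rasch P(correct)
scores = (rng.random((n_persons, n_items)) < p).sum(axis=1)

true_competent = theta >= 0.0                  # known competence standard
cut = np.median(scores[np.abs(theta) < 0.3])   # borderline group's midpoint
print(np.mean((scores >= cut) == true_competent))  # recovery rate
```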
Peer reviewed
Direct link
Maeda, Hotaka; Zhang, Bo – International Journal of Testing, 2017
The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…
Descriptors: Cheating, Test Items, Mathematics, Statistics
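Omega-type indices standardize the observed number of answer matches with a source examinee against the number expected from the copier's ability. A minimal sketch, taking the per-item match probabilities as given (in practice they come from a nominal response model, and the paper's contribution is an iterative deletion step before estimating ability):

```python
import numpy as np

def omega(matches, p_match):
    """Standardized difference between the observed number of answer
    matches and the number expected by chance given ability.
    p_match[i] is the model-implied probability that the copier
    independently selects the source's response on item i."""
    p = np.asarray(p_match, float)
    return (matches - p.sum()) / np.sqrt((p * (1 - p)).sum())

# 38 matches on 60 items when ~20 are expected by ability alone:
print(round(omega(38, np.full(60, 1 / 3)), 2))  # ~4.9, consistent with copying
```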
Peer reviewed
PDF on ERIC (download full text)
Çokluk, Ömay; Gül, Emrah; Dogan-Gül, Çilem – Educational Sciences: Theory and Practice, 2016
The study examines whether differential item functioning appears across three test forms whose items are ordered randomly or sequentially (easy-to-hard and hard-to-easy), using both Classical Test Theory (CTT) and Item Response Theory (IRT) methods and bearing item difficulty levels in mind. In the correlational research, the…
Descriptors: Test Bias, Test Items, Difficulty Level, Test Theory
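One standard CTT-based DIF check that a study like this might apply is the Mantel-Haenszel common odds ratio across total-score strata; a minimal sketch with illustrative stratum counts:

```python
import numpy as np

def mantel_haenszel_or(correct_ref, total_ref, correct_foc, total_foc):
    """Mantel-Haenszel common odds ratio across total-score strata, a
    standard CTT-based DIF index. Values near 1.0 indicate no DIF;
    values above 1.0 favor the reference group."""
    a = np.asarray(correct_ref, float)             # ref correct
    b = np.asarray(total_ref, float) - a           # ref incorrect
    c = np.asarray(correct_foc, float)             # focal correct
    d = np.asarray(total_foc, float) - c           # focal incorrect
    n = a + b + c + d
    return (a * d / n).sum() / (b * c / n).sum()

# Three ability strata; this item appears to favor the reference group:
print(mantel_haenszel_or([40, 60, 80], [50, 70, 85], [30, 55, 78], [50, 70, 85]))
```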
Peer reviewed
PDF on ERIC (download full text)
Shulruf, Boaz; Jones, Phil; Turner, Rolf – Higher Education Studies, 2015
The determination of pass/fail decisions over borderline grades (i.e., grades which do not clearly distinguish between competent and incompetent examinees) has been an ongoing challenge for academic institutions. This study utilises the Objective Borderline Method (OBM) to determine examinee ability and item difficulty, and from that…
Descriptors: Undergraduate Students, Pass Fail Grading, Decision Making, Probability
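A simplified decision rule in the spirit of the abstract, assuming Rasch-style probabilities; the OBM's actual procedure should be taken from the paper:

```python
import numpy as np

def resolve_borderline(ability, item_difficulties, standard=0.5):
    """Resolve a borderline grade by asking whether the examinee's
    average Rasch-style probability of beating the items' difficulty
    meets the standard. A stand-in for the OBM's published procedure."""
    d = np.asarray(item_difficulties, float)
    p_beat = 1 / (1 + np.exp(-(ability - d)))
    return "pass" if p_beat.mean() >= standard else "fail"

print(resolve_borderline(0.1, [-0.5, 0.0, 0.4, 0.8]))  # -> fail (mean p ~0.48)
```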
Peer reviewed
Direct link
Gagne, Francoys – High Ability Studies, 2012
From past knowledge of Ziegler's and Phillipson's work, the author knew before reading the manuscript that there would be significant conceptual disagreements. Yet, he was hoping to find enough points of convergence that they could lead to enriching exchanges and, maybe, future shared efforts at bridging gaps between their respective views.…
Descriptors: Gifted, Models, Probability, Statistics
Peer reviewed
PDF on ERIC (download full text)
Sacristán, Ana Isabel, Ed.; Cortés-Zavala, José Carlos, Ed.; Ruiz-Arias, Perla Marysol, Ed. – North American Chapter of the International Group for the Psychology of Mathematics Education, 2020
These proceedings are a written record of the research presented at the 42nd annual meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education (PME-NA), held in Mazatlán, Mexico, virtually beginning May 27, 2021, and in person June 2-6, 2021. The conference was originally scheduled to take place…
Descriptors: Mathematics Education, Teaching Methods, Cultural Differences, Educational Research
Peer reviewed
Direct link
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic inputs, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The slip and guess parameters are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
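The baseline DINA response function is standard: an examinee who holds every attribute an item requires succeeds with probability 1 - slip, and otherwise succeeds only by guessing. A minimal sketch with fixed slip/guess parameters (the paper's extension makes these random effects):

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """DINA response probabilities: item j is answered correctly with
    probability 1 - slip[j] if the examinee holds every attribute the
    Q-matrix requires for j, and with probability guess[j] otherwise."""
    alpha = np.asarray(alpha)               # (K,) 0/1 mastery profile
    q = np.asarray(q)                       # (J, K) Q-matrix
    eta = np.all(alpha >= q, axis=1)        # all required attributes held?
    return np.where(eta, 1 - np.asarray(slip, float), np.asarray(guess, float))

# Masters attributes 1 and 2 but not 3:
print(dina_prob([1, 1, 0],
                [[1, 0, 0], [1, 1, 0], [0, 1, 1]],
                slip=[0.1, 0.15, 0.2], guess=[0.2, 0.25, 0.1]))
# -> [0.9, 0.85, 0.1]; the third item needs attribute 3, so only guessing remains
```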
Peer reviewed
Direct link
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Educational and Psychological Measurement, 2015
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…
Descriptors: Competence, Tests, Evaluation Methods, Adults
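A sketch of the two-dimensional idea, assuming Rasch-type models for both the response and the omission indicator; parameter names are mine, not the authors':

```python
import numpy as np

def response_and_omission_probs(theta, xi, b_item, gamma_item):
    """Model-based handling of nonignorable missingness: a Rasch model
    for the response given ability theta, plus a Rasch-type model for
    omitting the item given a latent missing propensity xi."""
    p_correct = 1 / (1 + np.exp(-(theta - b_item)))   # measurement model
    p_omit = 1 / (1 + np.exp(-(xi - gamma_item)))     # missingness model
    return p_correct, p_omit

# Low ability, high omission propensity:
print(response_and_omission_probs(theta=-1.0, xi=1.2, b_item=0.0, gamma_item=0.5))
```

Estimating both dimensions jointly lets the missingness inform, rather than bias, the proficiency estimate.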
Peer reviewed
Direct link
Schuster, Christof; Yuan, Ke-Hai – Journal of Educational and Behavioral Statistics, 2011
Because of response disturbances such as guessing, cheating, or carelessness, item response models often can only approximate the "true" individual response probabilities. As a consequence, maximum-likelihood estimates of ability will be biased. Typically, the nature and extent to which response disturbances are present is unknown, and, therefore,…
Descriptors: Computation, Item Response Theory, Probability, Maximum Likelihood Statistics
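A minimal illustration of the bias being addressed, using a 2PL grid-search MLE; robust estimators of the kind such work proposes down-weight unexpected responses, whereas this sketch only exhibits the problem:

```python
import numpy as np

def mle_theta(x, a, b):
    """2PL maximum-likelihood ability estimate by grid search.
    x: 0/1 responses; a, b: item discriminations and difficulties."""
    x, a, b = (np.asarray(v, float) for v in (x, a, b))
    grid = np.linspace(-4, 4, 801)
    p = 1 / (1 + np.exp(-a * (grid[:, None] - b)))    # (grid, items)
    loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# The same pattern with one lucky guess on a very hard item (b = 3)
# pulls the estimate up by almost a full logit:
a, b = np.ones(6), np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 3.0])
print(mle_theta([1, 1, 1, 0, 0, 0], a, b))   # ~0.4
print(mle_theta([1, 1, 1, 0, 0, 1], a, b))   # ~1.3
```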
Peer reviewed
Direct link
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has recently been developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability
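A sketch combining an assumed 1PL-AG-style response function (a knowledge term plus an ability-dependent guessing term; the exact parameterization should be checked against the source) with Wald's sequential probability ratio test, a common engine for computerized classification:

```python
import numpy as np

def p_1plag(theta, b, lam=0.2, g0=-1.0):
    """Assumed 1PL-AG-style success probability: knowledge plus a
    guessing term whose strength grows with ability. lam and g0 are
    illustrative values, not the paper's."""
    know = 1 / (1 + np.exp(-(theta - b)))
    guess = 1 / (1 + np.exp(-(lam * theta + g0)))
    return know + (1 - know) * guess

def sprt_classify(x, b, theta0=-0.5, theta1=0.5, alpha=0.05, beta=0.05):
    """Wald's SPRT: accumulate the log-likelihood ratio of theta1 vs.
    theta0 item by item and stop once either error bound is crossed."""
    lo, hi = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for xi, bi in zip(x, b):
        p0, p1 = p_1plag(theta0, bi), p_1plag(theta1, bi)
        llr += xi * np.log(p1 / p0) + (1 - xi) * np.log((1 - p1) / (1 - p0))
        if llr <= lo:
            return "fail"
        if llr >= hi:
            return "pass"
    return "undecided"

print(sprt_classify([1, 1, 0] + [1] * 10, b=np.zeros(13)))  # -> pass
```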
Peer reviewed
Direct link
Maris, Gunter; Bechger, Timo – Measurement: Interdisciplinary Research and Perspectives, 2009
This paper addresses two problems relating to the interpretability of the model parameters in the three parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…
Descriptors: Item Response Theory, Models, Ability, Test Items
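For reference, the standard three-parameter logistic item characteristic curve under discussion (notation mine):

```latex
% Standard 3PL item characteristic curve:
P(X_{ij} = 1 \mid \theta_i)
  = c_j + (1 - c_j)\,
    \frac{\exp\{a_j(\theta_i - b_j)\}}{1 + \exp\{a_j(\theta_i - b_j)\}}
% The article's first result concerns the equal-discrimination case
% (a_j = a for all j), where ability, difficulty, and the guessing
% parameters can be traded off without changing the likelihood.
```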
Peer reviewed
PDF on ERIC (download full text)
Guo, Hongwen; Oh, Hyeonjoo J. – ETS Research Report Series, 2009
In operational equating, frequency estimation (FE) equipercentile equating is often excluded from consideration when the old and new groups have a large ability difference. This convention may, in some instances, cause the exclusion of one competitive equating method from the set of methods under consideration. In this report, we study the…
Descriptors: Equated Scores, Computation, Statistical Analysis, Test Items
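The FE step re-weights each group's conditional distribution of total score given anchor score by a synthetic-population anchor distribution; a minimal sketch with toy proportions:

```python
import numpy as np

def synthetic_dist(joint_new, anchor_old, w_old=0.5):
    """Frequency-estimation step under the NEAT design.
    joint_new: (n_totals, n_anchor) joint proportions, new group;
    anchor_old: anchor-score proportions from the old group."""
    cond = joint_new / joint_new.sum(axis=0, keepdims=True)   # f(x | v)
    anchor_new = joint_new.sum(axis=0)                        # h_new(v)
    anchor_syn = w_old * np.asarray(anchor_old) + (1 - w_old) * anchor_new
    return cond @ anchor_syn                                  # f_syn(x)

# Toy case: 3 total-score levels, 2 anchor levels.
joint_new = np.array([[0.10, 0.05],
                      [0.20, 0.15],
                      [0.20, 0.30]])
print(synthetic_dist(joint_new, anchor_old=[0.6, 0.4]))  # sums to 1
```

Equipercentile equating then aligns the percentile ranks of the two synthetic score distributions; the group ability difference enters through the divergent anchor distributions.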
Vogt, Dorothee K. – 1971
The Rasch model for the probability of a person's response to an item is extended to the case where this response depends on a set of scoring or category weights, in addition to person and item parameters. The maximum likelihood approach introduced by Wright for the dichotomous case is applicable here also, and it is shown to yield a unique…
Descriptors: Ability, Academic Ability, Academic Achievement, Attitude Measures
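One natural reading of the abstract is a category-weighted Rasch form with P(k) proportional to exp(w_k(θ - δ)); this functional form is an assumption consistent with the abstract, not a quotation of the report's equations:

```python
import numpy as np

def weighted_category_probs(theta, delta, weights):
    """Category probabilities for a Rasch-type model in which the
    response depends on category weights w_k as well as person and
    item parameters: P(k) proportional to exp(w_k * (theta - delta)).
    Assumed form for illustration."""
    w = np.asarray(weights, float)
    z = np.exp(w * (theta - delta))
    return z / z.sum()

# Dichotomous weights (0, 1) recover the ordinary Rasch item:
print(weighted_category_probs(0.8, 0.3, [0, 1]))  # [1-p, p], p = logistic(0.5)
```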