Showing 1 to 15 of 111 results
Peer reviewed
Direct link
Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Measurement: Interdisciplinary Research and Perspectives, 2021
This study offers an approach to test equating under the latent D-scoring method (DSM-L) using the nonequivalent groups with anchor tests (NEAT) design. The accuracy of the test equating was examined via a simulation study under a 3 × 3 design crossing two factors: group ability (three levels) and test difficulty (three levels). The results for…
Descriptors: Equated Scores, Scoring, Test Items, Accuracy
Peer reviewed
Direct link
Huelmann, Thorben; Debelak, Rudolf; Strobl, Carolin – Journal of Educational Measurement, 2020
This study addresses the topic of how anchoring methods for differential item functioning (DIF) analysis can be used in multigroup scenarios. The direct approach would be to combine anchoring methods developed for two-group scenarios with multigroup DIF-detection methods. Alternatively, multiple tests could be carried out. The results of these…
Descriptors: Test Items, Test Bias, Equated Scores, Item Analysis
Wu, Tong – ProQuest LLC, 2023
This three-article dissertation addresses three methodological challenges to ensuring comparability in educational research: scale linking, test equating, and propensity score (PS) weighting. The first study intends to improve test scale comparability by evaluating the effect of six missing data handling approaches, including…
Descriptors: Educational Research, Comparative Analysis, Equated Scores, Weighted Scores
Peer reviewed
Direct link
Zhang, Zhonghua – Journal of Experimental Education, 2022
Reporting standard errors of equating has been advocated as standard practice when conducting test equating. The two most widely applied procedures for standard errors of equating, the bootstrap method and the delta method, are either computationally intensive or require the derivation of complicated formulas. In the current study,…
Descriptors: Error of Measurement, Item Response Theory, True Scores, Equated Scores
Peer reviewed
Direct link
Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2021
This study presents a latent (item response theory-like) framework for a recently developed classical approach to test scoring, equating, and item analysis, referred to as the "D"-scoring method. Specifically, (a) person and item parameters are estimated under an item response function model on the "D"-scale (from 0 to 1) using…
Descriptors: Scoring, Equated Scores, Item Analysis, Item Response Theory
Mohammed Alqabbaa – ProQuest LLC, 2021
Psychometricians at the Education and Training Evaluation Commission (ETEC) developed a new test scoring method, the latent D-scoring method (DSM-L), which is believed to be easier and more efficient to use than the item response theory (IRT) method. However, there are no studies…
Descriptors: Item Response Theory, Scoring, Item Analysis, Equated Scores
Peer reviewed
Direct link
Musa Adekunle Ayanwale – Discover Education, 2023
Examination scores obtained by students from the West African Examinations Council (WAEC) and the National Business and Technical Examinations Board (NABTEB) may not be directly comparable due to differences in examination administration, item characteristics of the subject in question, and student abilities. For more accurate comparisons, scores…
Descriptors: Equated Scores, Mathematics Tests, Test Items, Test Format
Peer reviewed
PDF on ERIC: Download full text
Supriyati, Yetti; Iriyadi, Deni; Falani, Ilham – Journal of Technology and Science Education, 2021
This study aims to develop a score equating application for computer-based school exams using parallel test kits with 25% anchor items. The items are written to the HOTS (Higher Order Thinking Skills) category and follow a scientific approach consistent with the characteristics of physics lessons. Therefore, the questions were made using stimulus,…
Descriptors: Physics, Science Instruction, Teaching Methods, Equated Scores
Peer reviewed
Direct link
Wiberg, Marie; von Davier, Alina A. – International Journal of Testing, 2017
We propose a comprehensive procedure for implementing a quality control process for anchor tests in a college admissions test with multiple consecutive administrations. We examine the anchor tests and their items in connection with covariates to investigate whether there was any unusual behavior in the anchor test results over time…
Descriptors: College Entrance Examinations, Test Items, Equated Scores, Quality Control
Peer reviewed
Direct link
Nielsen, T.; Dammeyer, J.; Vang, M. L.; Makransky, G. – Scandinavian Journal of Educational Research, 2018
Studies have reported gender differences in academic self-efficacy. However, whether and how academic self-efficacy questionnaires are gender-biased has not been psychometrically investigated. The psychometric properties of a general version of the Physics Self-Efficacy Questionnaire -- the General Academic Self-Efficacy Scale (GASE) -- were analyzed…
Descriptors: Self Concept Measures, Self Efficacy, Sex Fairness, Psychometrics
Peer reviewed
Direct link
Zu, Jiyun; Puhan, Gautam – Journal of Educational Measurement, 2014
Preequating is in demand because it reduces score reporting time. In this article, we evaluated an observed-score preequating method, the empirical item characteristic curve (EICC) method, which makes preequating without item response theory (IRT) possible. EICC preequating results were compared with a criterion equating and with IRT true-score…
Descriptors: Item Response Theory, Equated Scores, Item Analysis, Item Sampling
Bramley, Tom – Cambridge Assessment, 2018
The aim of the research reported here was to gauge the accuracy of grade boundaries (cut-scores) obtained by applying the 'similar items method' described in Bramley & Wilson (2016). In this method, experts identify items on the current version of a test that are sufficiently similar to items on previous versions for them to be…
Descriptors: Accuracy, Cutting Scores, Test Items, Item Analysis
Peer reviewed
Direct link
Madya, Suwarsih; Retnawati, Heri; Purnawan, Ari; Putro, Nur Hidayanto Pancoro Setyo; Apino, Ezi – TEFLIN Journal: A publication on the teaching and learning of English, 2019
This explorative-descriptive study set out to examine the equivalence among Test of English Proficiency (TOEP) forms, developed by the Indonesian Testing Service Centre (ITSC) and co-founded by The Association for The Teaching of English as a Foreign Language in Indonesia (TEFLIN) and The Association of Psychology in Indonesia. Using a…
Descriptors: Language Tests, Language Proficiency, English (Second Language), Second Language Learning
Peer reviewed
Direct link
Kopf, Julia; Zeileis, Achim; Strobl, Carolin – Educational and Psychological Measurement, 2015
Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…
Descriptors: Test Items, Equated Scores, Test Bias, Item Response Theory
Peer reviewed
Direct link
Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2017
This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated…
Descriptors: Psychometrics, Test Items, Item Response Theory, Hypothesis Testing