Showing 1 to 15 of 43 results
Peer reviewed
Yusuf Kara; Akihito Kamata; Xin Qiao; Cornelis J. Potgieter; Joseph F. T. Nese – Educational and Psychological Measurement, 2024
Words read correctly per minute (WCPM) is the reporting score metric in oral reading fluency (ORF) assessments, which is widely used as part of curriculum-based measurement to screen at-risk readers and to monitor the progress of students who receive interventions. As with other types of assessments with multiple forms, equating would be…
Descriptors: Oral Reading, Reading Fluency, Models, Reading Rate
Peer reviewed
Ting Sun; Stella Yun Kim – Educational and Psychological Measurement, 2024
Equating is a statistical procedure used to adjust for differences in form difficulty so that scores on those forms can be used and interpreted comparably. In practice, however, equating methods are often implemented without considering the extent to which two forms differ in difficulty. The study aims to examine the effect of the magnitude…
Descriptors: Difficulty Level, Data Interpretation, Equated Scores, High School Students
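Sun and Kim's abstract treats equating as an adjustment for differences in form difficulty. As a minimal illustration of that general idea only, and not of the procedure studied in their article, the sketch below applies mean and linear equating to two simulated score distributions; the form names, sample sizes, and score distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw scores on two forms of unequal difficulty (invented data).
form_x = rng.normal(loc=27, scale=5, size=2000)  # harder form: lower scores
form_y = rng.normal(loc=30, scale=6, size=2000)  # easier form: higher scores

def mean_equate(x, y):
    """Shift form-X scores so their mean matches the form-Y mean."""
    return x + (y.mean() - x.mean())

def linear_equate(x, y):
    """Rescale form-X scores to match both the mean and SD of form Y."""
    return (x - x.mean()) * (y.std() / x.std()) + y.mean()

x_on_y = linear_equate(form_x, form_y)
print(round(x_on_y.mean(), 2), round(x_on_y.std(), 2))  # ~30.0, ~6.0
```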
Peer reviewed
Jiang, Zhehan; Han, Yuting; Xu, Lingling; Shi, Dexin; Liu, Ren; Ouyang, Jinying; Cai, Fen – Educational and Psychological Measurement, 2023
The portion of responses that is missing by design in the nonequivalent groups with anchor test (NEAT) design can be treated as a planned missing data scenario. In the context of small sample sizes, we present a machine learning (ML)-based imputation technique called chaining random forests (CRF) to perform equating tasks within the NEAT design. Specifically, seven…
Descriptors: Test Items, Equated Scores, Sample Size, Artificial Intelligence
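Jiang et al. impute the responses that are missing by design in a NEAT data collection using chained random forests. The sketch below is not their CRF implementation; it uses scikit-learn's IterativeImputer with a random-forest estimator as a generic stand-in, and the item counts, group sizes, and data layout are invented for illustration.

```python
import numpy as np
# enable_iterative_imputer must be imported before IterativeImputer (scikit-learn requirement).
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy NEAT-like layout: 10 unique items per group plus 5 shared anchor items.
# Each group never sees the other group's unique items, so those cells are missing by design.
n_per_group, n_unique, n_anchor = 100, 10, 5
g1 = rng.integers(0, 2, size=(n_per_group, n_unique + n_anchor)).astype(float)
g2 = rng.integers(0, 2, size=(n_per_group, n_unique + n_anchor)).astype(float)

data = np.full((2 * n_per_group, 2 * n_unique + n_anchor), np.nan)
data[:n_per_group, :n_unique] = g1[:, :n_unique]               # group 1 unique items
data[:n_per_group, -n_anchor:] = g1[:, n_unique:]              # group 1 anchors
data[n_per_group:, n_unique:2 * n_unique] = g2[:, :n_unique]   # group 2 unique items
data[n_per_group:, -n_anchor:] = g2[:, n_unique:]              # group 2 anchors

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=1),
    max_iter=5,
    random_state=1,
)
completed = imputer.fit_transform(data)  # planned-missing cells filled in
print(completed.shape)
```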
Peer reviewed
Fellinghauer, Carolina; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
This simulation study investigated to what extent departures from construct similarity as well as differences in the difficulty and targeting of scales impact the score transformation when scales are equated by means of concurrent calibration using the partial credit model with a common person design. Practical implications of the simulation…
Descriptors: True Scores, Equated Scores, Test Items, Sample Size
Peer reviewed
Kim, Stella Y.; Lee, Won-Chan; Kolen, Michael J. – Educational and Psychological Measurement, 2020
A theoretical and conceptual framework for true-score equating using a simple-structure multidimensional item response theory (SS-MIRT) model is developed. A true-score equating method, referred to as the SS-MIRT true-score equating (SMT) procedure, is also developed. SS-MIRT has several advantages over other complex multidimensional item response…
Descriptors: Item Response Theory, Equated Scores, True Scores, Accuracy
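Kim, Lee, and Kolen develop true-score equating under an SS-MIRT model. As background only, the sketch below shows conventional unidimensional IRT true-score equating under a 2PL model, not the SS-MIRT procedure developed in the article; the item parameters and seed are invented, and the two forms are assumed to be already on a common scale.

```python
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    """Test characteristic curve (expected number-correct score) under a 2PL model."""
    return (1 / (1 + np.exp(-a * (theta - b)))).sum()

def irt_true_score_equate(score_x, ax, bx, ay, by):
    """Map a true score on form X to the equivalent true score on form Y."""
    # Find the theta whose expected form-X score equals score_x ...
    theta = brentq(lambda t: tcc(t, ax, bx) - score_x, -6, 6)
    # ... and evaluate the form-Y test characteristic curve at that theta.
    return tcc(theta, ay, by)

# Hypothetical item parameters on a common scale (invented for illustration).
rng = np.random.default_rng(3)
ax, bx = rng.uniform(0.8, 2.0, 30), rng.normal(0.2, 1.0, 30)   # form X (slightly harder)
ay, by = rng.uniform(0.8, 2.0, 30), rng.normal(-0.2, 1.0, 30)  # form Y

for x in (5, 15, 25):
    print(x, round(irt_true_score_equate(x, ax, bx, ay, by), 2))
```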
Peer reviewed
Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2021
This study presents a latent (item response theory-like) framework for a recently developed classical approach to test scoring, equating, and item analysis, referred to as the "D"-scoring method. Specifically, (a) person and item parameters are estimated under an item response function model on the "D"-scale (from 0 to 1) using…
Descriptors: Scoring, Equated Scores, Item Analysis, Item Response Theory
Peer reviewed
Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
Equating of psychometric scales and tests is frequently required and conducted in educational, behavioral, and clinical research. Construct comparability or equivalence between measuring instruments is a necessary condition for making decisions about linking and equating resulting scores. This article is concerned with a widely applicable method…
Descriptors: Evaluation Methods, Psychometrics, Screening Tests, Dementia
Peer reviewed
Babcock, Ben; Hodge, Kari J. – Educational and Psychological Measurement, 2020
Equating and scaling in the context of small sample exams, such as credentialing exams for highly specialized professions, has received increased attention in recent research. Investigators have proposed a variety of both classical and Rasch-based approaches to the problem. This study attempts to extend past research by (1) directly comparing…
Descriptors: Item Response Theory, Equated Scores, Scaling, Sample Size
Peer reviewed
Qiu, Yuxi; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
This study aimed to assess the accuracy of the empirical item characteristic curve (EICC) preequating method in the presence of test speededness. The simulation design of this study considered the proportion of speededness, speededness point, speededness rate, proportion of missing responses on speeded items, sample size, and test length. After crossing…
Descriptors: Accuracy, Equated Scores, Test Items, Nonparametric Statistics
Peer reviewed
Preston, Kathleen Suzanne Johnson; Gottfried, Allen W.; Park, Jonathan J.; Manapat, Patrick Don; Gottfried, Adele Eskeles; Oliver, Pamella H. – Educational and Psychological Measurement, 2018
Measurement invariance is a prerequisite when comparing different groups of individuals or when studying a group of individuals across time. It ensures that the same construct is assessed without measurement artifacts. This investigation applied a novel approach of simultaneous parameter linking to cross-sectional and longitudinal measures of…
Descriptors: Longitudinal Studies, Family Relationship, Measurement, Measures (Individuals)
Peer reviewed
Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2016
This article describes an approach to test scoring, referred to as "delta scoring" (D-scoring), for tests with dichotomously scored items. D-scoring uses information from item response theory (IRT) calibration to facilitate computations and interpretations in the context of large-scale assessments. The D-score is computed from the…
Descriptors: Scoring, Equated Scores, Test Items, Measurement
Peer reviewed
Huggins, Anne Corinne – Educational and Psychological Measurement, 2014
Invariant relationships in the internal mechanisms of estimating achievement scores on educational tests serve as the basis for concluding that a particular test is fair with respect to statistical bias concerns. Equating invariance and differential item functioning are both concerned with invariant relationships yet are treated separately in the…
Descriptors: Test Bias, Test Items, Equated Scores, Achievement Tests
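Huggins' article relates equating invariance to differential item functioning. As general background rather than the analysis used in that article, the sketch below computes a Mantel-Haenszel DIF statistic for a single dichotomous item; the toy data and the group and matching variables are invented for illustration.

```python
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """Common odds ratio and ETS delta for one studied item.

    item  : 0/1 responses to the studied item
    total : matching variable (e.g., total or rest score)
    group : 0 = reference, 1 = focal
    """
    num, den = 0.0, 0.0
    for score in np.unique(total):
        mask = total == score
        ref, foc = mask & (group == 0), mask & (group == 1)
        a = item[ref].sum()       # reference correct
        b = ref.sum() - a         # reference incorrect
        c = item[foc].sum()       # focal correct
        d = foc.sum() - c         # focal incorrect
        t = mask.sum()
        num += a * d / t
        den += b * c / t
    odds_ratio = num / den
    return odds_ratio, -2.35 * np.log(odds_ratio)  # ETS delta metric

# Toy data, invented for illustration only: the item is harder for the focal group.
rng = np.random.default_rng(2)
n = 500
group = rng.integers(0, 2, size=n)
ability = rng.normal(size=n)
item = (rng.random(n) < 1 / (1 + np.exp(-(ability - 0.5 * group)))).astype(int)
total = (rng.random((n, 20)) < 1 / (1 + np.exp(-ability[:, None]))).sum(axis=1)

print(mantel_haenszel_dif(item, total, group))
```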
Peer reviewed
Kopf, Julia; Zeileis, Achim; Strobl, Carolin – Educational and Psychological Measurement, 2015
Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…
Descriptors: Test Items, Equated Scores, Test Bias, Item Response Theory
Peer reviewed
Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2017
This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated…
Descriptors: Psychometrics, Test Items, Item Response Theory, Hypothesis Testing
Peer reviewed
Preston, Kathleen Suzanne Johnson; Parral, Skye N.; Gottfried, Allen W.; Oliver, Pamella H.; Gottfried, Adele Eskeles; Ibrahim, Sirena M.; Delany, Danielle – Educational and Psychological Measurement, 2015
A psychometric analysis was conducted using the nominal response model under the item response theory framework to construct the Positive Family Relationships scale. Using data from the Fullerton Longitudinal Study, this scale was constructed within a long-term longitudinal framework spanning middle childhood through adolescence. Items tapping…
Descriptors: Family Relationship, Measures (Individuals), Psychometrics, Models