Showing all 8 results
Peer reviewed
Direct link
Schweizer, Karl; Reiß, Siegbert; Troche, Stefan – Educational and Psychological Measurement, 2019
The article reports three simulation studies conducted to determine whether the effect of a time limit for testing impairs model fit in investigations of structural validity, whether representing the assumed source of the effect prevents this impairment of model fit, and whether it is possible to identify and discriminate this method effect from…
Descriptors: Timed Tests, Testing, Barriers, Testing Problems
Peer reviewed
Direct link
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Li, Tatyana; Menold, Natalja – Educational and Psychological Measurement, 2018
A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary-scored items with no guessing is outlined. The approach extends the continuous indicator procedure described by Raykov and colleagues, similarly utilizes the false discovery rate approach to multiple testing, and…
Descriptors: Models, Statistical Analysis, Error of Measurement, Test Bias
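The Raykov et al. entry above mentions the false discovery rate approach to multiple testing. As a hedged illustration of that general idea only, and not the authors' specific latent variable procedure, a minimal Benjamini-Hochberg sketch in Python with made-up p-values might look like this:

import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Flag hypotheses rejected under the Benjamini-Hochberg procedure at level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                        # indices of p-values, smallest first
    thresholds = alpha * np.arange(1, m + 1) / m # per-rank BH thresholds
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()           # largest rank whose p-value meets its threshold
        reject[order[:k + 1]] = True             # reject that rank and all smaller p-values
    return reject

# Made-up p-values from item-level invariance tests
p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(p_vals, alpha=0.05))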
Peer reviewed
Direct link
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C. – Educational and Psychological Measurement, 2018
Path models with observed composites based on multiple items (e.g., the mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without error. In this study, we reviewed and evaluated two alternative methods within the structural…
Descriptors: Error of Measurement, Testing, Scores, Models
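The Hsiao, Kwok, and Lai entry above describes the common practice of testing interactions with observed composites that are treated as error-free. A minimal sketch of that practice, with entirely hypothetical simulated data and variable names (the article's alternative structural equation methods are not reproduced here):

import numpy as np

rng = np.random.default_rng(1)
n = 300
# Simulated item responses for two constructs (4 items each): shared factor + noise
x_items = rng.normal(size=(n, 4)) + rng.normal(size=(n, 1))
m_items = rng.normal(size=(n, 4)) + rng.normal(size=(n, 1))
x = x_items.mean(axis=1)                  # observed composite for predictor X
m = m_items.mean(axis=1)                  # observed composite for moderator M
y = 0.5 * x + 0.3 * m + 0.2 * x * m + rng.normal(size=n)

# Ordinary least squares with a product term X*M (composites assumed error-free)
design = np.column_stack([np.ones(n), x, m, x * m])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(dict(zip(["intercept", "X", "M", "X*M"], np.round(coef, 3))))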
Peer reviewed
Direct link
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes a novel method to predict multistage testing (MST) performance without conducting simulations. The method, called MST test information, is based on the analytic derivation of standard errors of ability estimates across theta levels. We compared the analytically derived standard errors to simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
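The Park et al. entry above refers to analytically derived standard errors of ability estimates across theta levels. Assuming a simple 2PL test-information calculation, where SE(theta) = 1/sqrt(I(theta)), one hedged sketch with invented item parameters is:

import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a single 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # probability of a correct response
    return a**2 * p * (1.0 - p)

def analytic_se(theta_grid, a, b):
    """Test information and SE(theta) = 1/sqrt(I(theta)) over a grid of theta values."""
    info = np.zeros_like(theta_grid, dtype=float)
    for aj, bj in zip(a, b):
        info += item_information_2pl(theta_grid, aj, bj)
    return info, 1.0 / np.sqrt(info)

thetas = np.linspace(-3, 3, 13)
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (hypothetical)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.2])  # difficulties (hypothetical)
info, se = analytic_se(thetas, a, b)
for t, s in zip(thetas, se):
    print(f"theta={t:+.1f}  SE={s:.3f}")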
Peer reviewed
Direct link
Sideridis, Georgios; Tsaousis, Ioannis; Al Harbi, Khaleel – Educational and Psychological Measurement, 2017
The purpose of the present article was to illustrate, using an example from a national assessment, the value of analyzing the behavior of distractors in measures that use the multiple-choice format. A secondary purpose of the present article was to illustrate four remedial actions that can potentially improve the measurement of the…
Descriptors: Multiple Choice Tests, Attention Control, Testing, Remedial Instruction
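The Sideridis et al. entry above concerns the behavior of distractors in multiple-choice items. A very basic, hypothetical sketch of a distractor analysis (option choice proportions by score group), which does not reproduce the article's remedial actions:

import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_options = 200, 4
responses = rng.integers(0, n_options, size=n_examinees)   # chosen option per examinee (0 = key)
total_scores = rng.normal(50, 10, size=n_examinees)        # hypothetical total test scores

# Split examinees into low/high groups at the median total score; a well-behaved
# distractor should attract relatively more low-group than high-group examinees.
high = total_scores >= np.median(total_scores)
for option in range(n_options):
    p_low = np.mean(responses[~high] == option)
    p_high = np.mean(responses[high] == option)
    label = "key" if option == 0 else f"distractor {option}"
    print(f"{label:>12}: low-group {p_low:.2f}, high-group {p_high:.2f}")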
Peer reviewed
Direct link
Kim, Jihye; Oshima, T. C. – Educational and Psychological Measurement, 2013
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. Because a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control the Type I error rate and power using adjustment…
Descriptors: Test Bias, Test Items, Statistical Analysis, Error of Measurement
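The Kim and Oshima entry above deals with adjustment methods for controlling the Type I error rate across many item-level DIF tests. As an assumption-laden sketch of two standard familywise adjustments (Bonferroni and Holm), not necessarily the ones compared in the study:

import numpy as np

def bonferroni(p_values, alpha=0.05):
    """Reject H0 for items whose p-value falls below alpha divided by the number of items."""
    p = np.asarray(p_values, dtype=float)
    return p < alpha / p.size

def holm(p_values, alpha=0.05):
    """Holm step-down adjustment: less conservative than Bonferroni, same FWER control."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                   # test the smallest p-values first
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if p[idx] < alpha / (m - rank):
            reject[idx] = True
        else:
            break                           # stop at the first non-rejection
    return reject

dif_p = [0.002, 0.010, 0.013, 0.048, 0.300]   # made-up per-item DIF p-values
print(bonferroni(dif_p), holm(dif_p))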
Peer reviewed Peer reviewed
Cronbach, Lee J.; And Others – Educational and Psychological Measurement, 1997
Through the standard error, rather than a reliability coefficient, generalizability theory provides an indicator of the uncertainty attached to school and individual scores on performance assessments. Recommendations are made to apply generalizability theory to current performance assessments, emphasizing practices that differ from usual…
Descriptors: Academic Achievement, Error of Measurement, Generalizability Theory, Performance Based Assessment
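The Cronbach et al. entry above emphasizes reporting a standard error from generalizability theory rather than a reliability coefficient. A minimal sketch, assuming a single-facet person-by-task design and hypothetical variance components:

import math

# Hypothetical variance component estimates from a G study
var_task = 0.15          # task main effect
var_person_task = 0.40   # person-by-task interaction, confounded with residual error
n_tasks = 6              # number of tasks in the intended D study

# Absolute-error standard error of measurement for a mean score over n_tasks tasks
sem_absolute = math.sqrt(var_task / n_tasks + var_person_task / n_tasks)
print(f"Absolute SEM with {n_tasks} tasks: {sem_absolute:.3f}")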
Peer reviewed
Direct link
Chang, Shun-Wen – Educational and Psychological Measurement, 2006
This study evaluates the effects of employing the linear, normalizing, and arcsine transformation methods for constructing scale scores on the Basic Competence Test (BCTEST). Tests in three subject areas (Chinese, English, and Mathematics) were studied using data from test administrations from 2001 to 2003. The resulting scale scores for each…
Descriptors: Standardized Tests, Achievement Tests, Test Theory, True Scores
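The Chang entry above compares linear, normalizing, and arcsine transformation methods for scale scores. As a hedged illustration with hypothetical raw scores and an assumed 0-60 reporting range (not the BCTEST's actual scaling), the linear and arcsine cases could be computed as follows:

import numpy as np

raw = np.array([12, 25, 33, 40, 48])   # hypothetical raw scores
n_items = 50
prop_correct = raw / n_items

# Linear transformation of proportion correct onto the reporting scale
linear_scale = prop_correct * 60

# Variance-stabilizing arcsine transformation, rescaled so 0 maps to 0 and 1 maps to 60
arcsine = np.arcsin(np.sqrt(prop_correct))
arcsine_scale = arcsine / (np.pi / 2) * 60

for r, l, a in zip(raw, linear_scale, arcsine_scale):
    print(f"raw {r:2d}: linear {l:5.1f}, arcsine {a:5.1f}")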