Showing all 9 results
Phillips, Shane Michael – ProQuest LLC, 2012
Propensity score matching is a relatively new technique used in observational studies to approximate the conditions of random assignment to treatment. The technique combines the values of several covariates into a single propensity score that is then used as a matching variable to create comparable groups. This dissertation comprises two separate…
Descriptors: Statistical Analysis, Educational Research, Simulation, Observation
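A minimal sketch of the propensity score matching idea summarized in the entry above, assuming simulated covariates and a 1:1 nearest-neighbour match with replacement; the data, variable names, and matching rule are illustrative and not taken from the dissertation:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))                                   # observed covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.8, -0.5, 0.3]))))

# Step 1: collapse the covariates into a single propensity score,
# the estimated probability of treatment given the covariates.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: use the score as the matching variable -- pair each treated
# unit with the untreated unit whose score is closest.
treated = np.flatnonzero(treat == 1)
control = np.flatnonzero(treat == 0)
matches = {t: control[np.argmin(np.abs(ps[control] - ps[t]))] for t in treated}
print(f"{len(matches)} treated units matched to similar untreated units")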
Peer reviewed
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun – Educational and Psychological Measurement, 2012
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts across groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Descriptors: Test Items, Simulation, Testing, Statistical Analysis
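A minimal numpy sketch of the simulation condition the entry above turns on: data in which the groups differ not only in the latent mean (which the MIMIC group effect is meant to capture) but also in a factor loading. The sample size, loadings, and effect sizes are invented for illustration:

import numpy as np

rng = np.random.default_rng(2)
n = 1000
group = rng.integers(0, 2, n)                  # the MIMIC covariate (0/1 dummy)
eta = 0.3 * group + rng.normal(size=n)         # latent factor with a group mean shift

loadings = np.array([0.8, 0.7, 0.6, 0.7])
y = eta[:, None] * loadings + rng.normal(scale=0.5, size=(n, 4))

# Factor loading noninvariance: the first indicator loads more weakly in group 1.
g1 = group == 1
y[g1, 0] = 0.4 * eta[g1] + rng.normal(scale=0.5, size=g1.sum())

# A MIMIC model fitted to (y, group) would regress the factor on the group dummy
# while holding the loadings equal across groups; the entry above reports that
# such a test tends not to detect the loading difference built in here.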
Peer reviewed
Barakat, Bilal Fouad – International Journal of Educational Development, 2012
The number of years a child of school-entry age can expect to remain in school is of great interest, both as a measure of individual human capital and as a measure of the performance of an education system. An approximate indicator of this concept is the sum of age-specific enrolment rates. The relatively low data demands of this indicator that are feasible to…
Descriptors: Human Capital, Measurement Techniques, Simulation, Evaluation Methods
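A worked toy example of the approximate indicator mentioned above, school life expectancy as the sum of age-specific enrolment rates; the rates are invented for illustration:

# Hypothetical enrolment rates for ages 6-17.
enrolment_rate = [0.95, 0.97, 0.97, 0.96, 0.94, 0.92,   # ages 6-11
                  0.85, 0.80, 0.72, 0.60, 0.45, 0.30]   # ages 12-17

# Summing the age-specific rates gives the expected number of years a child
# of school-entry age will spend in school.
school_life_expectancy = sum(enrolment_rate)
print(f"Expected years in school: {school_life_expectancy:.1f}")   # about 9.4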
Tian, Feng – ProQuest LLC, 2011
There has been a steady increase in the use of mixed-format tests, that is, tests consisting of both multiple-choice items and constructed-response items, in both classroom and large-scale assessments. This calls for appropriate equating methods for such tests. As Item Response Theory (IRT) has rapidly become mainstream as the theoretical basis for…
Descriptors: Item Response Theory, Comparative Analysis, Equated Scores, Statistical Analysis
Peer reviewed
Lee, Won-Chan; Ban, Jae-Chun – Applied Measurement in Education, 2010
Various applications of item response theory often require linking to achieve a common scale for item parameter estimates obtained from different groups. This article used a simulation to examine the relative performance of four different item response theory (IRT) linking procedures in a random groups equating design: concurrent calibration with…
Descriptors: Item Response Theory, Simulation, Comparative Analysis, Measurement Techniques
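A hedged sketch of what IRT linking accomplishes in a design like the one described above: after separate calibrations, a linear transformation places one form's parameter estimates on the other form's scale. Matching the first two moments of the ability estimates from randomly equivalent groups, as below, is only one simple way to obtain the constants and is not necessarily among the four procedures compared in the article:

import numpy as np

rng = np.random.default_rng(5)
# Ability estimates from two separately calibrated, randomly equivalent groups
# (simulated here); differences in location and spread reflect scale indeterminacy.
theta_x = rng.normal(0.0, 1.0, 2000)     # group taking form X (base scale)
theta_y = rng.normal(0.2, 1.1, 2000)     # group taking form Y (its own scale)

A = theta_x.std() / theta_y.std()        # slope of the linking transformation
B = theta_x.mean() - A * theta_y.mean()  # intercept

# Parameters estimated on form Y's scale expressed on form X's scale:
#   theta* = A*theta + B,   b* = A*b + B,   a* = a / A
b_y = np.array([-1.0, 0.0, 1.2])
a_y = np.array([0.9, 1.2, 0.7])
b_y_linked, a_y_linked = A * b_y + B, a_y / A
print(A, B)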
Peer reviewed
Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien – Educational and Psychological Measurement, 2009
This study incorporates a scale purification procedure into the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…
Descriptors: Test Items, Measures (Individuals), Test Bias, Evaluation Research
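A rough sketch of the scale purification loop the entry above describes: items flagged for DIF are dropped from the scale used for matching, and the tests are repeated until the flagged set stabilises. The DIF check inside the loop is a simple logistic-regression test standing in for the MIMIC-based test in the article, and the data, item count, and 0.05 flagging rule are illustrative assumptions:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, n_items = 2000, 10
group = rng.integers(0, 2, n)
theta = rng.normal(size=n)
b = rng.normal(0, 1, n_items)
p = 1 / (1 + np.exp(-(theta[:, None] - b)))
p[:, 0] = 1 / (1 + np.exp(-(theta - b[0] + 0.6 * group)))   # uniform DIF on item 0
resp = rng.binomial(1, p)

anchor, flagged = list(range(n_items)), set()
for _ in range(10):                                # iterate until the flags stabilise
    new_flags = set()
    for j in range(n_items):
        rest = [i for i in anchor if i != j]
        match = resp[:, rest].sum(axis=1)          # matching score from the purified scale
        X = sm.add_constant(np.column_stack([match, group]).astype(float))
        fit = sm.Logit(resp[:, j], X).fit(disp=0)
        if fit.pvalues[2] < 0.05:                  # group effect beyond the matching score
            new_flags.add(j)
    if new_flags == flagged:
        break
    flagged = new_flags
    anchor = [i for i in range(n_items) if i not in flagged]
print("items flagged for DIF:", sorted(flagged))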
Peer reviewed
Cools, Wilfried; De Fraine, Bieke; Van den Noortgate, Wim; Onghena, Patrick – School Effectiveness and School Improvement, 2009
In educational effectiveness research, multilevel data analyses are often used because the research units studied (most frequently, pupils or teachers) are nested in groups (schools and classes). This hierarchical data structure complicates designing the study because the structure has to be taken into account when approximating the accuracy…
Descriptors: Effective Schools Research, Program Effectiveness, School Effectiveness, Simulation
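A small worked example of why the nesting has to be taken into account when approximating accuracy, using the standard two-level design-effect adjustment; the cluster size and intraclass correlation below are invented:

n_schools, pupils_per_school = 40, 25
icc = 0.15                                    # intraclass correlation (between-school share of variance)
n_total = n_schools * pupils_per_school

# Ignoring the nesting treats all 1000 pupils as independent; the design effect
# shows how much the effective sample size shrinks when estimating an overall
# mean from clustered data.
design_effect = 1 + (pupils_per_school - 1) * icc
effective_n = n_total / design_effect
print(f"{n_total} pupils ~ {effective_n:.0f} effectively independent observations")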
Peer reviewed
French, Brian F.; Finch, W. Holmes – Structural Equation Modeling: A Multidisciplinary Journal, 2008
Multigroup confirmatory factor analysis (MCFA) is a popular method for the examination of measurement invariance and, specifically, factor invariance. Recent research has begun to focus on using MCFA to detect invariance for test items. MCFA requires certain parameters (e.g., factor loadings) to be constrained for model identification, which are…
Descriptors: Test Items, Simulation, Factor Structure, Factor Analysis
Peer reviewed
Download full text (PDF available on ERIC)
Robin, Frédéric; van der Linden, Wim J.; Eignor, Daniel R.; Steffen, Manfred; Stocking, Martha L. – ETS Research Report Series, 2005
The relatively new shadow test approach (STA) to computerized adaptive testing (CAT) proposed by Wim van der Linden is a potentially attractive alternative to the weighted deviation algorithm (WDA) implemented at ETS. However, it has not been evaluated under testing conditions representative of current ETS testing programs. Of interest was whether…
Descriptors: Test Construction, Computer Assisted Testing, Simulation, Evaluation Methods
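A greatly simplified sketch of the shadow-test idea behind the STA, assuming a small simulated 2PL pool and three hypothetical content areas: at each step a full-length test that satisfies all constraints (the shadow test) is assembled by integer programming, and the best not-yet-administered item from it is given next. The pool, constraints, and solver choice (scipy) are illustrative assumptions, not details of the ETS evaluation:

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(7)
n_items, test_len = 60, 10
a = rng.uniform(0.8, 2.0, n_items)            # 2PL discriminations
b = rng.normal(0.0, 1.0, n_items)             # 2PL difficulties
content = rng.integers(0, 3, n_items)         # three hypothetical content areas

def info(theta):
    # Fisher information of each pool item at ability theta.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def shadow_test(theta, administered):
    # Assemble a full-length, maximally informative test that contains every
    # already-administered item and at least 3 items per content area.
    c = -info(theta)                           # milp minimizes, so negate
    cons = [LinearConstraint(np.ones((1, n_items)), test_len, test_len)]
    for k in range(3):
        row = (content == k).astype(float).reshape(1, -1)
        cons.append(LinearConstraint(row, 3, test_len))
    lb = np.zeros(n_items)
    lb[list(administered)] = 1.0               # force administered items into the test
    res = milp(c, constraints=cons, integrality=np.ones(n_items),
               bounds=Bounds(lb, np.ones(n_items)))
    return np.flatnonzero(res.x > 0.5)

theta_hat, administered = 0.0, set()
for _ in range(test_len):
    shadow = shadow_test(theta_hat, administered)
    free = [i for i in shadow if i not in administered]
    administered.add(max(free, key=lambda i: info(theta_hat)[i]))
    # In a real CAT, theta_hat would be re-estimated from the response here.
print("items administered:", sorted(administered))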