Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 8 |
Descriptor
Evaluation Research | 9 |
Measurement Techniques | 9 |
Simulation | 9 |
Evaluation Methods | 7 |
Test Items | 6 |
Item Response Theory | 4 |
Statistical Analysis | 3 |
Comparative Analysis | 2 |
Educational Testing | 2 |
Equated Scores | 2 |
Factor Analysis | 2 |
Source
Educational and Psychological Measurement | 2
ProQuest LLC | 2
Applied Measurement in Education | 1
ETS Research Report Series | 1
International Journal of Educational Development | 1
School Effectiveness and School Improvement | 1
Structural Equation Modeling: A Multidisciplinary Journal | 1
Author
Ban, Jae-Chun | 1 |
Barakat, Bilal Fouad | 1 |
Cools, Wilfried | 1 |
De Fraine, Bieke | 1 |
Eignor, Daniel R. | 1 |
Finch, W. Holmes | 1 |
French, Brian F. | 1 |
Kim, Eun Sook | 1 |
Lee, Taehun | 1 |
Lee, Won-Chan | 1 |
Onghena, Patrick | 1 |
Publication Type
Journal Articles | 7 |
Reports - Research | 5 |
Dissertations/Theses -… | 2 |
Reports - Evaluative | 2 |
Education Level
Elementary Secondary Education | 1 |
Location
Japan | 1 |
United States | 1 |
Assessments and Surveys
ACT Assessment | 1 |
Phillips, Shane Michael – ProQuest LLC, 2012
Propensity score matching is a relatively new technique used in observational studies to approximate data that have been randomly assigned to treatment. This technique assimilates the values of several covariates into a single propensity score that is used as a matching variable to create similar groups. This dissertation comprises two separate…
Descriptors: Statistical Analysis, Educational Research, Simulation, Observation
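A minimal sketch of the matching idea described in this abstract, using synthetic data, scikit-learn's LogisticRegression for the propensity model, and 1-to-1 nearest-neighbour matching with replacement; the covariates and effect sizes are illustrative assumptions, not the dissertation's actual procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000
X = rng.normal(size=(n, 3))                                   # observed covariates
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))  # treatment depends on covariates
treat = (rng.random(n) < p_treat).astype(int)
y = X @ np.array([1.0, 0.5, -0.5]) + 2.0 * treat + rng.normal(size=n)

# collapse the covariates into a single propensity score
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 1-to-1 nearest-neighbour matching (with replacement) on the propensity score
treated = np.where(treat == 1)[0]
controls = np.where(treat == 0)[0]
matches = controls[np.argmin(np.abs(ps[treated][:, None] - ps[controls][None, :]), axis=1)]

# compare outcomes across the matched pairs to estimate the treatment effect
print("matched-sample effect estimate:", (y[treated] - y[matches]).mean())
```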
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun – Educational and Psychological Measurement, 2012
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Descriptors: Test Items, Simulation, Testing, Statistical Analysis
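To make the setup concrete, here is a small data-generation sketch of the situation the study examines: a grouping covariate shifts the latent mean while one indicator's factor loading differs across groups, violating the loading-invariance assumption standard MIMIC modeling makes. All loadings, intercepts, and effect sizes are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)            # the MIMIC "cause": a 0/1 grouping covariate

# the covariate shifts the latent mean -- the effect MIMIC modeling is meant to detect
eta = 0.5 * group + rng.normal(0, 1, n)

# factor loadings: the fourth indicator's loading differs across groups (loading noninvariance)
loadings = np.tile([0.8, 0.7, 0.9, 0.8], (n, 1))
loadings[group == 1, 3] = 0.4

intercepts = np.array([0.0, 0.2, -0.1, 0.1])
y = intercepts + loadings * eta[:, None] + rng.normal(0, 0.5, (n, 4))

# fitting a MIMIC model to `y` with `group` as the cause while constraining loadings
# equal across groups reproduces the condition the study examines
```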
Barakat, Bilal Fouad – International Journal of Educational Development, 2012
The number of years a child of school-entry age can expect to remain in school is of great interest both as a measure of individual human capital and of the performance of an education system. An approximate indicator of this concept is the sum of age-specific enrolment rates. The relatively low data demands of this indicator that are feasible to…
Descriptors: Human Capital, Measurement Techniques, Simulation, Evaluation Methods
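The indicator mentioned in the abstract is simply the sum of age-specific enrolment rates (school life expectancy); the rates below are hypothetical, chosen only to show the arithmetic.

```python
# hypothetical age-specific enrolment rates for ages 6-17
rates = [0.95, 0.96, 0.97, 0.96, 0.95, 0.93, 0.90, 0.85, 0.78, 0.70, 0.60, 0.50]
enrolment = dict(zip(range(6, 18), rates))

# school life expectancy: each year of age contributes its enrolment probability
expected_years = sum(enrolment.values())
print(f"expected years of schooling: {expected_years:.1f}")
```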
Tian, Feng – ProQuest LLC, 2011
There has been a steady increase in the use of mixed-format tests, that is, tests consisting of both multiple-choice items and constructed-response items in both classroom and large-scale assessments. This calls for appropriate equating methods for such tests. As Item Response Theory (IRT) has rapidly become mainstream as the theoretical basis for…
Descriptors: Item Response Theory, Comparative Analysis, Equated Scores, Statistical Analysis
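IRT true-score equating for a mixed-format test works through the test characteristic curve: the expected total score at a given ability, summed over dichotomous (here 2PL) and polytomous (here generalized partial credit) items. The sketch below evaluates that curve for a tiny made-up item set; the item parameters and model choices are illustrative assumptions, not the equating methods compared in the dissertation.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response to a dichotomous item."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def p_gpcm(theta, a, steps):
    """Generalized partial credit category probabilities for one polytomous item."""
    z = np.concatenate([[0.0], np.cumsum(a * (theta - np.asarray(steps)))])
    ez = np.exp(z - z.max())
    return ez / ez.sum()

def expected_score(theta, mc_items, cr_items):
    """Test characteristic curve: expected total score at a given ability."""
    score = sum(p_2pl(theta, a, b) for a, b in mc_items)
    for a, steps in cr_items:
        p = p_gpcm(theta, a, steps)
        score += np.dot(np.arange(len(p)), p)       # expected category score
    return score

mc_items = [(1.2, -0.5), (0.9, 0.0), (1.0, 0.8)]    # (a, b) for multiple-choice items
cr_items = [(1.1, [-0.8, 0.2, 1.0])]                # (a, step difficulties) for a 0-3 point item
print(expected_score(0.0, mc_items, cr_items))
```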
Lee, Won-Chan; Ban, Jae-Chun – Applied Measurement in Education, 2010
Various applications of item response theory often require linking to achieve a common scale for item parameter estimates obtained from different groups. This article used a simulation to examine the relative performance of four different item response theory (IRT) linking procedures in a random groups equating design: concurrent calibration with…
Descriptors: Item Response Theory, Simulation, Comparative Analysis, Measurement Techniques
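As one concrete example of an IRT linking transformation, the sketch below applies the classical mean/sigma method to place parameters from one calibration onto another calibration's scale. The truncated abstract does not list all four procedures compared, so this is a generic illustration with made-up values, not necessarily one of the article's methods.

```python
import numpy as np

# difficulty estimates for the same items from two separate calibrations (illustrative values)
b_new = np.array([-1.2, -0.4, 0.1, 0.7, 1.3])   # new-form scale
b_base = np.array([-1.0, -0.2, 0.3, 0.9, 1.5])  # base-form scale

# mean/sigma linking: find A, B such that b_base ≈ A * b_new + B
A = b_base.std(ddof=1) / b_new.std(ddof=1)
B = b_base.mean() - A * b_new.mean()

# place a new-form ability estimate on the base scale
theta_new = 0.5
print("A =", A, "B =", B, "theta on base scale =", A * theta_new + B)
```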
Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien – Educational and Psychological Measurement, 2009
This study implements a scale purification procedure onto the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…
Descriptors: Test Items, Measures (Individuals), Test Bias, Evaluation Research
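A rough sketch of the purification loop the abstract refers to: items flagged for DIF are dropped from the matching (anchor) set and the remaining items are re-tested until the flagged set stabilises. For brevity this stand-in uses a logistic-regression DIF check on a rest score rather than the MIMIC model itself, and the synthetic data, flagging threshold, and helper name `flag_dif` are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_items = 2000, 10
group = rng.integers(0, 2, n)                    # 0 = reference, 1 = focal
theta = rng.normal(0, 1, n)
b = np.linspace(-1.5, 1.5, n_items)
logits = theta[:, None] - b[None, :]
logits[:, 0] += 0.8 * group                      # item 0 carries uniform DIF
resp = (rng.random((n, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)

def flag_dif(resp, group, anchor, thresh=0.4):
    """Flag items whose group effect, conditional on the anchor rest score, is large."""
    flags = []
    for i in range(resp.shape[1]):
        rest = resp[:, [j for j in anchor if j != i]].sum(axis=1)
        coefs = LogisticRegression(max_iter=1000).fit(
            np.column_stack([rest, group]), resp[:, i]).coef_[0]
        if abs(coefs[1]) > thresh:
            flags.append(i)
    return flags

# scale purification: drop flagged items from the anchor and re-test until the set stabilises
anchor, flags = list(range(n_items)), []
for _ in range(10):
    flags = flag_dif(resp, group, anchor)
    new_anchor = [i for i in range(n_items) if i not in flags]
    if new_anchor == anchor:
        break
    anchor = new_anchor
print("items flagged for DIF:", flags)
```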
Cools, Wilfried; De Fraine, Bieke; Van den Noortgate, Wim; Onghena, Patrick – School Effectiveness and School Improvement, 2009
In educational effectiveness research, multilevel data analyses are often used because the research units studied (most frequently pupils or teachers) are nested in groups (schools and classes). This hierarchical data structure complicates designing the study because the structure has to be taken into account when approximating the accuracy…
Descriptors: Effective Schools Research, Program Effectiveness, School Effectiveness, Simulation
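The accuracy/power calculation the abstract alludes to is often approximated by simulation. Below is a small sketch assuming a two-level design with a binary school-level predictor, a chosen effect size and intraclass correlation, and statsmodels' MixedLM for the fit; all numbers are illustrative and the replication count is kept small so the sketch runs quickly.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def approx_power(n_schools=30, n_pupils=20, effect=0.3, icc=0.2, n_sim=100):
    """Approximate power for a binary school-level predictor by simulating two-level data."""
    hits = 0
    for _ in range(n_sim):
        school = np.repeat(np.arange(n_schools), n_pupils)
        x = np.repeat(rng.integers(0, 2, n_schools), n_pupils)           # school-level predictor
        u = np.repeat(rng.normal(0, np.sqrt(icc), n_schools), n_pupils)  # school random effects
        y = effect * x + u + rng.normal(0, np.sqrt(1 - icc), n_schools * n_pupils)
        res = sm.MixedLM(y, sm.add_constant(x.astype(float)), groups=school).fit()
        hits += res.pvalues.iloc[1] < 0.05          # p-value of the school-level effect
    return hits / n_sim

print("approximate power:", approx_power())
```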
French, Brian F.; Finch, W. Holmes – Structural Equation Modeling: A Multidisciplinary Journal, 2008
Multigroup confirmatory factor analysis (MCFA) is a popular method for the examination of measurement invariance and specifically, factor invariance. Recent research has begun to focus on using MCFA to detect invariance for test items. MCFA requires certain parameters (e.g., factor loadings) to be constrained for model identification, which are…
Descriptors: Test Items, Simulation, Factor Structure, Factor Analysis
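Not a full multigroup CFA (which would estimate both groups jointly with equality constraints), but a quick sketch of the underlying issue: when data are generated with one noninvariant loading, separate single-group factor analyses recover different loadings for that indicator. The data, loading values, and use of scikit-learn's FactorAnalysis are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 1500
loadings_ref = np.array([0.8, 0.7, 0.9, 0.8])
loadings_foc = np.array([0.8, 0.7, 0.9, 0.3])    # last indicator's loading is noninvariant

def simulate(loadings):
    eta = rng.normal(0, 1, n)
    return loadings * eta[:, None] + rng.normal(0, 0.5, (n, len(loadings)))

for name, lam in [("reference", loadings_ref), ("focal", loadings_foc)]:
    est = FactorAnalysis(n_components=1).fit(simulate(lam)).components_[0]
    print(name, np.round(np.abs(est), 2))         # loadings in absolute value (sign is arbitrary)
```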
Robin, Frédéric; van der Linden, Wim J.; Eignor, Daniel R.; Steffen, Manfred; Stocking, Martha L. – ETS Research Report Series, 2005
The relatively new shadow test approach (STA) to computerized adaptive testing (CAT) proposed by Wim van der Linden is a potentially attractive alternative to the weighted deviation algorithm (WDA) implemented at ETS. However, it has not been evaluated under testing conditions representative of current ETS testing programs. Of interest was whether…
Descriptors: Test Construction, Computer Assisted Testing, Simulation, Evaluation Methods
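A compact sketch of one shadow-test selection step, assuming a synthetic 2PL item pool, a simple content blueprint, and the PuLP MILP solver: assemble a full-length test that maximises information at the current ability estimate, meets the blueprint, and contains all items already given, then administer the most informative free item. The pool, constraints, and solver choice are illustrative; the testing conditions evaluated in the report are not reproduced here.

```python
import numpy as np
import pulp

rng = np.random.default_rng(3)
n_pool, test_len = 60, 10
a = rng.uniform(0.6, 1.8, n_pool)                 # 2PL discriminations
b = rng.normal(0, 1, n_pool)                      # 2PL difficulties
content = rng.integers(0, 3, n_pool)              # three content areas
min_per_area = {0: 3, 1: 3, 2: 3}                 # blueprint for the full-length test

def info(theta):
    """Fisher information of each 2PL item at theta."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def shadow_test(theta, administered):
    """Assemble a full-length test maximizing information at theta, meeting the blueprint,
    and containing every item already administered."""
    inf = info(theta)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_pool)]
    prob = pulp.LpProblem("shadow", pulp.LpMaximize)
    prob += pulp.lpSum(inf[i] * x[i] for i in range(n_pool))
    prob += pulp.lpSum(x) == test_len
    for area, k in min_per_area.items():
        prob += pulp.lpSum(x[i] for i in range(n_pool) if content[i] == area) >= k
    for i in administered:
        prob += x[i] == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return [i for i in range(n_pool) if x[i].value() == 1]

# one selection step: administer the most informative free item from the shadow test
theta_hat, administered = 0.0, []
shadow = shadow_test(theta_hat, administered)
free = [i for i in shadow if i not in administered]
print("next item to administer:", max(free, key=lambda i: info(theta_hat)[i]))
```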