Publication Date
  In 2025 (0)
  Since 2024 (1)
  Since 2021, last 5 years (2)
  Since 2016, last 10 years (2)
  Since 2006, last 20 years (4)
Descriptor
  Computer Software (7)
  Simulation (5)
  Item Response Theory (4)
  Evaluation Methods (3)
  Accuracy (2)
  Computer Simulation (2)
  Error of Measurement (2)
  Measurement Techniques (2)
  Models (2)
  Psychometrics (2)
  Response Style (Tests) (2)
Source
  Educational and Psychological… (7)
Author
  Alexander, Ralph A. (1)
  D'Urso, E. Damiano (1)
  De Corte, Wilfried (1)
  De Roover, Kim (1)
  DeMars, Christine E. (1)
  DeShon, Richard P. (1)
  Furlow, Carolyn (1)
  Gagne, Phill (1)
  Ross, Terris (1)
  Shih, Ching-Lin (1)
  Sun, Guo-Wei (1)
Publication Type
  Journal Articles (7)
  Reports - Research (5)
  Reports - Descriptive (2)
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
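The core of an IRTree model is the recoding of each observed rating into a set of binary pseudo-items, one per decision node. A minimal sketch for a 5-point Likert item, assuming a common three-node tree (midpoint, direction, extremity) that is illustrative and not necessarily the layout used in the article:

```python
# Sketch of the pseudo-item recoding behind an IRTree model for a
# 5-point Likert item. The tree layout (midpoint -> direction ->
# extremity) is a common choice, assumed here for illustration.

def irtree_pseudo_items(response):
    """Map a response in 1..5 to (midpoint, direction, extreme).

    Nodes that are never reached are coded None (structurally
    missing), as is standard in IRTree recoding.
    """
    if response == 3:
        return (1, None, None)            # midpoint chosen; later nodes unreached
    direction = 0 if response < 3 else 1  # disagree side vs. agree side
    extreme = 1 if response in (1, 5) else 0
    return (0, direction, extreme)

codes = {r: irtree_pseudo_items(r) for r in range(1, 6)}
```

Each pseudo-item can then be modeled with its own item response function, which is what lets the substantive trait and the response-style traits load on different nodes.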
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
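One step in evaluating a measurement model with EFA is deciding how many constructs the items measure. A minimal sketch of an eigenvalue-based dimensionality check (the Kaiser criterion) on invented two-factor toy data; real applications would use a full EFA with rotation:

```python
import numpy as np

# Estimate the number of constructs underlying six items by counting
# eigenvalues of the item correlation matrix above 1 (Kaiser
# criterion). The two-factor data below are invented for illustration.

rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.standard_normal((2, n))            # two latent factors
items = np.column_stack([
    f1 + 0.5 * rng.standard_normal(n),          # items 1-3 load on factor 1
    f1 + 0.5 * rng.standard_normal(n),
    f1 + 0.5 * rng.standard_normal(n),
    f2 + 0.5 * rng.standard_normal(n),          # items 4-6 load on factor 2
    f2 + 0.5 * rng.standard_normal(n),
    f2 + 0.5 * rng.standard_normal(n),
])
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
n_factors = int(np.sum(eigvals > 1.0))          # recovers the two constructs
```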
Wang, Wen-Chung; Shih, Ching-Lin; Sun, Guo-Wei – Educational and Psychological Measurement, 2012
The DIF-free-then-DIF (DFTD) strategy consists of two steps: (a) select a set of items that are the most likely to be DIF-free and (b) assess the other items for DIF (differential item functioning) using the designated items as anchors. The rank-based method together with the computer software IRTLRDIF can select a set of DIF-free polytomous items…
Descriptors: Test Bias, Test Items, Item Response Theory, Evaluation Methods
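The two-step DFTD logic can be sketched mechanically: rank items by a preliminary DIF statistic, take the lowest-ranked as anchors, then flag the remaining items against a threshold. The statistics below are invented placeholders; in practice they would come from an IRT likelihood-ratio DIF method such as IRTLRDIF:

```python
# Sketch of the DIF-free-then-DIF (DFTD) two-step strategy with
# placeholder DIF statistics (larger = more evidence of DIF).

def dftd_select(dif_stats, n_anchors=4, threshold=3.84):
    """Return (anchor_ids, flagged_ids) from {item_id: DIF statistic}.

    Step (a): the n_anchors items with the smallest statistics are
    treated as likely DIF-free anchors. Step (b): the rest are tested
    against the threshold (3.84 = chi-square(1) critical value).
    """
    ranked = sorted(dif_stats, key=dif_stats.get)
    anchors = ranked[:n_anchors]
    candidates = ranked[n_anchors:]
    flagged = [i for i in candidates if dif_stats[i] > threshold]
    return anchors, flagged

stats = {"i1": 0.2, "i2": 0.5, "i3": 1.1, "i4": 0.8, "i5": 6.3, "i6": 2.0}
anchors, flagged = dftd_select(stats)   # i5 is flagged as showing DIF
```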
Gagne, Phill; Furlow, Carolyn; Ross, Terris – Educational and Psychological Measurement, 2009
In item response theory (IRT) simulation research, it is often necessary to use one software package for data generation and a second software package to conduct the IRT analysis. Because this can substantially slow down the simulation process, it is sometimes offered as a justification for using very few replications. This article provides…
Descriptors: Item Response Theory, Simulation, Computer Software, Automation
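The data-generation step such simulations automate can be sketched directly. A minimal example of generating dichotomous responses under the two-parameter logistic (2PL) model, with arbitrary parameter values; a real study would loop this over many replications:

```python
import numpy as np

# Generate dichotomous item responses under the 2PL IRT model.

rng = np.random.default_rng(42)
n_persons, n_items = 1000, 20
theta = rng.standard_normal(n_persons)   # person abilities
a = rng.uniform(0.8, 2.0, n_items)       # item discriminations
b = rng.standard_normal(n_items)         # item difficulties

# P(correct) under the 2PL: logistic in a_j * (theta_i - b_j)
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = (rng.uniform(size=p.shape) < p).astype(int)
```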
DeShon, Richard P.; Alexander, Ralph A. – Educational and Psychological Measurement, 1994
James's second-order approximation for testing the equality of "k" independent means under heterogeneity of variance may be adapted to the test for the equality of "k" independent regression slopes under heterogeneity of error variance. Performance of the approximation is evaluated and availability of computer programs is…
Descriptors: Computer Software, Equations (Mathematics), Regression (Statistics), Simulation
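A sketch of the first-order (Welch/James-type) chi-square statistic for the equality of k independent regression slopes under heteroscedastic error variance; the second-order approximation studied here refines the critical value with a series correction, omitted below. Slopes and standard errors are invented:

```python
import numpy as np

# First-order James-type test for H0: all k slopes are equal,
# using inverse-variance weights (df = k - 1).

def james_first_order(slopes, ses):
    """Chi-square statistic for equality of k independent slopes."""
    slopes = np.asarray(slopes, float)
    w = 1.0 / np.asarray(ses, float) ** 2      # inverse-variance weights
    pooled = np.sum(w * slopes) / np.sum(w)    # weighted common slope
    return float(np.sum(w * (slopes - pooled) ** 2))

stat = james_first_order([0.42, 0.45, 0.40], [0.05, 0.08, 0.06])
reject = stat > 5.991   # chi-square(2) critical value at alpha = .05
```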
De Corte, Wilfried – Educational and Psychological Measurement, 2004
The article describes a Windows program to estimate the expected value and sampling distribution function of the adverse impact ratio for general multistage selections. The results of the program can also be used to predict the risk that a future selection decision will result in an outcome that reflects the presence of adverse impact. The method…
Descriptors: Sampling, Measurement Techniques, Evaluation Methods, Computer Software
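The sampling-distribution idea can be sketched by simulation for a single-stage selection: draw selection outcomes for two groups, form the ratio of selection rates, and estimate the risk that it falls below the four-fifths (0.8) benchmark. Group sizes and selection probabilities below are invented, and De Corte's program handles the more general multistage case:

```python
import numpy as np

# Monte Carlo sketch of the sampling distribution of the adverse
# impact (AI) ratio for a single-stage selection.

rng = np.random.default_rng(7)
n_min, n_maj = 50, 200        # applicants per group (invented)
p_min, p_maj = 0.25, 0.30     # true selection probabilities (invented)

reps = 10_000
sel_min = rng.binomial(n_min, p_min, reps) / n_min   # minority selection rates
sel_maj = rng.binomial(n_maj, p_maj, reps) / n_maj   # majority selection rates
ai_ratio = sel_min / np.maximum(sel_maj, 1e-12)      # guard divide-by-zero
risk = float(np.mean(ai_ratio < 0.8))                # P(violating 4/5 rule)
```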
DeMars, Christine E. – Educational and Psychological Measurement, 2005
Type I error rates for PARSCALE's fit statistic were examined. Data were generated to fit the partial credit or graded response model, with test lengths of 10 or 20 items. The ability distribution was simulated to be either normal or uniform. Type I error rates were inflated for the shorter test length and, for the graded-response model, also for…
Descriptors: Test Length, Item Response Theory, Psychometrics, Error of Measurement
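The design behind such a study is generic: generate data under the null model, apply the test, and record how often it rejects at the nominal alpha. A minimal sketch using a simple two-sample z-test on null-generated data (not PARSCALE's item-fit statistic):

```python
import numpy as np

# Monte Carlo estimate of a Type I error rate: under the null, the
# rejection rate at alpha = .05 should be close to .05.

rng = np.random.default_rng(1)
reps, n = 5_000, 200
rejections = 0
for _ in range(reps):
    x, y = rng.standard_normal((2, n))   # null: identical populations
    z = (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / n + y.var(ddof=1) / n)
    rejections += abs(z) > 1.96          # two-sided test at alpha = .05
type1_rate = rejections / reps           # inflation would show up here
```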