Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 13
Since 2006 (last 20 years): 20
Descriptor
Computer Software: 34
Item Response Theory: 15
Models: 11
Computation: 8
Statistical Analysis: 8
Test Items: 8
Accuracy: 5
Bayesian Statistics: 5
Comparative Analysis: 5
Correlation: 5
Foreign Countries: 5
Source
Educational and Psychological…: 34
Author
Luo, Yong: 3
Wang, Wen-Chung: 3
Berry, Kenneth J.: 2
Mielke, Paul W., Jr.: 2
Aiken, Lewis R.: 1
Alexander, Ralph A.: 1
Benson, Jeri: 1
Chen, Hui-Fang: 1
D'Urso, E. Damiano: 1
De Roover, Kim: 1
DeMars, Christine E.: 1
Publication Type
Journal Articles: 34
Reports - Research: 34
Numerical/Quantitative Data: 1
Tests/Questionnaires: 1
Location
Germany: 1
Hong Kong: 1
Saudi Arabia: 1
Taiwan: 1
United States: 1
Assessments and Surveys
Trends in International…: 3
Program for International…: 2
Computer Anxiety Scale: 1
Coopersmith Self Esteem…: 1
Self Perception Profile for…: 1
Students Evaluation of…: 1
Sideridis, Georgios; Tsaousis, Ioannis; Ghamdi, Hanan – Educational and Psychological Measurement, 2023
The purpose of the present study was to provide the means to evaluate the "interval-scaling" assumption that governs the use of parametric statistics and continuous data estimators in self-report instruments that utilize Likert-type scaling. Using simulated and real data, the methodology to test for this important assumption is evaluated…
Descriptors: Intervals, Scaling, Computer Software, Likert Scales
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
Evaluation of Variance Inflation Factors in Regression Models Using Latent Variable Modeling Methods
Marcoulides, Katerina M.; Raykov, Tenko – Educational and Psychological Measurement, 2019
A procedure that can be used to evaluate the variance inflation factors and tolerance indices in linear regression models is discussed. The method permits both point and interval estimation of these factors and indices associated with explanatory variables considered for inclusion in a regression model. The approach makes use of popular latent…
Descriptors: Regression (Statistics), Statistical Analysis, Computation, Computer Software
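As background for the entry above: the classical variance inflation factor it generalizes can be sketched directly. This is a minimal illustration of the standard VIF formula, VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the remaining predictors; it is not the latent-variable interval-estimation procedure the article proposes, and the simulated data are purely illustrative.

```python
# Minimal sketch of variance inflation factors (VIF) for a predictor
# matrix, using the standard definition VIF_j = 1 / (1 - R_j^2).
import numpy as np

def vif(X):
    """Return the VIF for each column of the n x p predictor matrix X."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        # Regress column j on an intercept plus the remaining columns.
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Illustrative data: x1 and x2 nearly collinear, x3 independent.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
print(vif(X))  # first two VIFs are large, the third is near 1
```

A VIF well above common rule-of-thumb cutoffs (often 5 or 10) flags the collinear pair, while the independent predictor stays near the minimum of 1.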
Nagy, Gabriel; Ulitzsch, Esther – Educational and Psychological Measurement, 2022
Disengaged item responses pose a threat to the validity of the results provided by large-scale assessments. Several procedures for identifying disengaged responses on the basis of observed response times have been suggested, and item response theory (IRT) models for response engagement have been proposed. We outline that response time-based…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Predictor Variables, Classification
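The simplest of the response time-based procedures the entry above alludes to is a fixed time threshold: responses faster than some cutoff are flagged as likely rapid guesses. This sketch shows only that baseline rule, not the authors' model; the threshold value is illustrative, not a recommendation.

```python
# Sketch of a fixed-threshold rule for flagging disengaged
# ("rapid-guessing") responses from observed response times.
def flag_disengaged(times, threshold=3.0):
    """Return True for each response faster than the threshold (seconds)."""
    return [t < threshold for t in times]

def disengagement_rate(times, threshold=3.0):
    """Proportion of an examinee's responses flagged as disengaged."""
    flags = flag_disengaged(times, threshold)
    return sum(flags) / len(flags)

times = [1.2, 10.5, 2.9, 45.0]  # illustrative response times in seconds
print(flag_disengaged(times))   # [True, False, True, False]
```

Model-based approaches replace the hard cutoff with latent engagement classes, but the flag-and-count logic above is the usual point of comparison.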
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Luo, Yong – Educational and Psychological Measurement, 2018
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Descriptors: Computer Software, Models, Statistical Analysis, Computation
Fikis, David R. J.; Oshima, T. C. – Educational and Psychological Measurement, 2017
Purification of the test has been a well-accepted procedure in enhancing the performance of tests for differential item functioning (DIF). As defined by Lord, purification requires reestimation of ability parameters after removing DIF items before conducting the final DIF analysis. IRTPRO 3 is a recently updated program for analyses in item…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Computer Software
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
Isiordia, Marilu; Ferrer, Emilio – Educational and Psychological Measurement, 2018
A first-order latent growth model assesses change in an unobserved construct from a single score and is commonly used across different domains of educational research. However, examining change using a set of multiple response scores (e.g., scale items) affords researchers several methodological benefits not possible when using a single score. A…
Descriptors: Educational Research, Statistical Analysis, Models, Longitudinal Studies
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
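To make the idea in the entry above concrete: plausible values are random draws from each examinee's posterior distribution of the latent trait, rather than a single point estimate. The sketch below assumes a normal posterior approximation N(theta_hat, se^2); that simplification, and all parameter values, are illustrative and not the article's estimation method.

```python
# Sketch: plausible values as draws from a normal approximation to an
# examinee's posterior, N(theta_hat, se^2). Five draws is the count
# conventionally used in large-scale surveys.
import random

def plausible_values(theta_hat, se, m=5, seed=0):
    """Draw m plausible values for one examinee's latent trait."""
    rng = random.Random(seed)
    return [rng.gauss(theta_hat, se) for _ in range(m)]

pvs = plausible_values(theta_hat=0.4, se=0.3)
print(len(pvs))  # 5
```

Averaging a statistic over the plausible values (and pooling the between-draw variance) is what lets population-level quantities be estimated without biasing them toward shrunken point estimates.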
Yavuz, Guler; Hambleton, Ronald K. – Educational and Psychological Measurement, 2017
Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…
Descriptors: Item Response Theory, Models, Comparative Analysis, Computer Software
Luo, Yong; Jiao, Hong – Educational and Psychological Measurement, 2018
Stan is a new Bayesian statistical software program that implements the powerful and efficient Hamiltonian Monte Carlo (HMC) algorithm. To date, no source systematically provides Stan code for various item response theory (IRT) models. This article provides Stan code for three representative IRT models, including the…
Descriptors: Bayesian Statistics, Item Response Theory, Probability, Computer Software
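One of the representative IRT models the entry above refers to is the two-parameter logistic (2PL). Independently of Stan, its item response function and per-examinee log-likelihood can be sketched in a few lines; the item parameters below are hypothetical, and this is not the article's Stan code.

```python
# Sketch of the two-parameter logistic (2PL) IRT model:
# P(correct) = logistic(a * (theta - b)), with discrimination a
# and difficulty b.
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def loglik(theta, items, responses):
    """Bernoulli log-likelihood of a response pattern given theta."""
    ll = 0.0
    for (a, b), x in zip(items, responses):
        p = p_2pl(theta, a, b)
        ll += x * math.log(p) + (1 - x) * math.log(1.0 - p)
    return ll

items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]  # hypothetical (a, b) pairs
print(p_2pl(0.0, 1.0, 0.0))  # 0.5 when theta equals b
```

In a Bayesian fit (e.g., via HMC), this likelihood is combined with priors on theta, a, and b; the sketch shows only the model structure those programs encode.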
Preston, Kathleen Suzanne Johnson; Reise, Steven Paul – Educational and Psychological Measurement, 2014
The nominal response model (NRM), a much understudied polytomous item response theory (IRT) model, provides researchers the unique opportunity to evaluate within-item category distinctions. Polytomous IRT models, such as the NRM, are frequently applied to psychological assessments representing constructs that are unlikely to be normally…
Descriptors: Item Response Theory, Computation, Models, Accuracy
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank – Educational and Psychological Measurement, 2016
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
Descriptors: Educational Assessment, Coding, Automation, Responses
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items