Publication Date
In 2025 | 1
Since 2024 | 1
Since 2021 (last 5 years) | 2
Since 2016 (last 10 years) | 3
Since 2006 (last 20 years) | 7
Descriptor
Computer Software | 8
Error of Measurement | 8
Evaluation Methods | 8
Item Response Theory | 4
Simulation | 4
Computation | 3
Factor Analysis | 3
Measurement Techniques | 3
Accuracy | 2
Comparative Analysis | 2
Item Analysis | 2
Source
Educational and Psychological Measurement | 3
Applied Psychological Measurement | 1
International Journal of Assessment Tools in Education | 1
International Journal of Testing | 1
Psychological Methods | 1
Structural Equation Modeling: A Multidisciplinary Journal | 1
Publication Type
Journal Articles | 8
Reports - Research | 6
Reports - Descriptive | 1
Reports - Evaluative | 1
Manuel T. Rein; Jeroen K. Vermunt; Kim De Roover; Leonie V. D. E. Vogelsmeier – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Researchers often study dynamic processes of latent variables in everyday life, such as the interplay of positive and negative affect over time. An intuitive approach is to first estimate the measurement model of the latent variables, then compute factor scores, and finally use these factor scores as observed scores in vector autoregressive…
Descriptors: Measurement Techniques, Factor Analysis, Scores, Validity
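A minimal sketch of the two-step workflow described in the abstract above, assuming simulated indicator data, scikit-learn's FactorAnalysis for the measurement step, and statsmodels' VAR for the dynamic step; it illustrates the idea of treating factor scores as observed series, not the authors' estimator.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 500  # time points

# Simulate two latent processes (e.g., positive and negative affect) that
# follow a VAR(1), plus six noisy indicators (three per factor).
A = np.array([[0.5, -0.2], [-0.1, 0.4]])  # auto- and cross-lagged effects
eta = np.zeros((T, 2))
for t in range(1, T):
    eta[t] = eta[t - 1] @ A.T + rng.normal(scale=1.0, size=2)
loadings = np.array([[0.8, 0.0], [0.7, 0.0], [0.9, 0.0],
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.9]])
Y = eta @ loadings.T + rng.normal(scale=0.5, size=(T, 6))

# Step 1: estimate the measurement model; step 2: compute factor scores.
fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(Y)

# Step 3: use the factor scores as observed series in a VAR(1).
var_fit = VAR(scores).fit(1)
print(var_fit.coefs[0])  # estimated lag-1 coefficient matrix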
Guler, Gul; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
The purpose of this study was to investigate the Type I error and power rates of the methods used to determine dimensionality in unidimensional and bidimensional psychological constructs under various conditions (characteristics of the distribution, sample size, test length, and interdimensional correlation) and to examine the joint…
Descriptors: Comparative Analysis, Error of Measurement, Decision Making, Factor Analysis
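A rough Monte Carlo illustration of how such Type I error rates can be estimated, using a simple parallel-analysis retention rule on simulated unidimensional data; the sample size, test length, and loading are placeholders, not the study's conditions.

import numpy as np

rng = np.random.default_rng(1)

def parallel_analysis_n_factors(X, n_sim=100):
    """Retain factors whose correlation-matrix eigenvalues exceed the mean
    eigenvalues obtained from random normal data of the same size."""
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sim):
        R = np.corrcoef(rng.normal(size=(n, p)), rowvar=False)
        rand += np.sort(np.linalg.eigvalsh(R))[::-1]
    return int(np.sum(obs > rand / n_sim))

def simulate_unidimensional(n=500, p=10, loading=0.6):
    """All items load on one factor; retaining more than one factor is a Type I error."""
    theta = rng.normal(size=(n, 1))
    return theta * loading + rng.normal(scale=np.sqrt(1 - loading ** 2), size=(n, p))

reps = 200
false_extra_factor = sum(parallel_analysis_n_factors(simulate_unidimensional()) > 1
                         for _ in range(reps))
print(f"Empirical Type I error rate: {false_extra_factor / reps:.3f}")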
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales
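As a hedged numerical illustration of what "dominance" of a common factor can mean, the sketch below computes the explained common variance (ECV) from a hypothetical bifactor loading matrix; this is a related descriptive index, not the latent variable modeling procedure the article develops for binary items.

import numpy as np

# Hypothetical standardized loadings: column 0 is the general factor,
# columns 1-2 are specific factors for two item clusters.
loadings = np.array([
    [0.7, 0.4, 0.0],
    [0.6, 0.5, 0.0],
    [0.8, 0.3, 0.0],
    [0.7, 0.0, 0.4],
    [0.6, 0.0, 0.5],
    [0.8, 0.0, 0.3],
])

general_var = np.sum(loadings[:, 0] ** 2)
specific_var = np.sum(loadings[:, 1:] ** 2)
ecv = general_var / (general_var + specific_var)
print(f"Share of common variance explained by the general factor: {ecv:.2f}")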
Wang, Wen-Chung; Shih, Ching-Lin; Sun, Guo-Wei – Educational and Psychological Measurement, 2012
The DIF-free-then-DIF (DFTD) strategy consists of two steps: (a) select a set of items that are most likely to be free of differential item functioning (DIF) and (b) assess the other items for DIF using the designated items as anchors. The rank-based method together with the computer software IRTLRDIF can select a set of DIF-free polytomous items…
Descriptors: Test Bias, Test Items, Item Response Theory, Evaluation Methods
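A schematic sketch of the two-step DFTD logic, substituting a logistic-regression likelihood-ratio statistic for the rank-based method and IRTLRDIF (which is separate software); the simulated data, anchor count, and cutoff are arbitrary illustrations.

import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def dif_stat(item, matching, group):
    """Likelihood-ratio statistic for adding a group effect beyond the matching score."""
    X0 = sm.add_constant(np.column_stack([matching]))
    X1 = sm.add_constant(np.column_stack([matching, group]))
    ll0 = sm.Logit(item, X0).fit(disp=0).llf
    ll1 = sm.Logit(item, X1).fit(disp=0).llf
    return 2 * (ll1 - ll0)

def dftd(responses, group, n_anchors=4, alpha=0.05):
    total = responses.sum(axis=1)
    stats = np.array([dif_stat(responses[:, j], total, group)
                      for j in range(responses.shape[1])])
    anchors = np.argsort(stats)[:n_anchors]           # step (a): likely DIF-free items
    anchor_score = responses[:, anchors].sum(axis=1)  # step (b): re-test the rest
    flagged = [j for j in range(responses.shape[1]) if j not in anchors
               and chi2.sf(dif_stat(responses[:, j], anchor_score, group), df=1) < alpha]
    return anchors, flagged

# Simulated example: 10 dichotomous items, uniform DIF injected into item 0.
rng = np.random.default_rng(2)
n, p = 1000, 10
group = rng.integers(0, 2, size=n)
theta = rng.normal(size=n)
logits = theta[:, None] - np.linspace(-1, 1, p)
logits[:, 0] += 0.8 * group
responses = (rng.random((n, p)) < 1 / (1 + np.exp(-logits))).astype(int)
print(dftd(responses, group))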
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people than for another, irrespective of true group-mean differences on the constructs being measured. This article focuses on item response theory-based likelihood ratio testing for DIF (IRT-LR or…
Descriptors: Simulation, Item Response Theory, Testing, Questionnaires
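At its core, an IRT-LR DIF test compares a compact model that constrains an item's parameters to be equal across groups with an augmented model that frees them. A minimal sketch of the test statistic itself, with hypothetical fitted log-likelihoods:

from scipy.stats import chi2

loglik_constrained = -5234.7  # hypothetical log-likelihood, parameters equal across groups
loglik_free = -5228.1         # hypothetical log-likelihood, studied item's parameters freed
df = 2                        # e.g., discrimination and difficulty both freed

lr = 2 * (loglik_free - loglik_constrained)
p_value = chi2.sf(lr, df)
print(f"G2 = {lr:.2f}, df = {df}, p = {p_value:.4f}")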
Savalei, Victoria; Kolenikov, Stanislav – Psychological Methods, 2008
Recently, R. D. Stoel, F. G. Garre, C. Dolan, and G. van den Wittenboer (2006) reviewed approaches for obtaining reference mixture distributions for difference tests when a parameter is on the boundary. The authors of the present study argue that this methodology is incomplete without a discussion of when the mixtures are needed and show that they…
Descriptors: Structural Equation Models, Goodness of Fit, Evaluation Methods, Statistical Analysis
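For the common case of a single variance parameter tested at its boundary value of zero, the appropriate reference distribution is a 50:50 mixture of a point mass at zero and a chi-square with one degree of freedom rather than a plain chi-square. A small sketch of the resulting p-value, with a hypothetical test statistic:

from scipy.stats import chi2

def boundary_pvalue(lr_stat):
    """p-value under the 0.5*chi2(0) + 0.5*chi2(1) reference mixture."""
    if lr_stat <= 0:
        return 1.0
    return 0.5 * chi2.sf(lr_stat, df=1)  # chi2(0) contributes only at zero

lr_stat = 2.9  # hypothetical difference in -2 log-likelihoods
print(f"naive chi2(1) p = {chi2.sf(lr_stat, 1):.3f}, mixture p = {boundary_pvalue(lr_stat):.3f}")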
Kim, Seock-Ho – Educational and Psychological Measurement, 2007
The procedures required to obtain the approximate posterior standard deviations of the parameters in the three commonly used item response models for dichotomous items are described and used to generate values for some common situations. The results were compared with those obtained from maximum likelihood estimation. It is shown that the use of…
Descriptors: Item Response Theory, Computation, Comparative Analysis, Evaluation Methods
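An illustrative comparison in the spirit of the article, assuming a single Rasch (one-parameter logistic) item and treating examinee abilities as known to keep the sketch short: the approximate posterior standard deviation of the item difficulty under a standard normal prior, computed on a grid, next to the information-based maximum likelihood standard error.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
theta = rng.normal(size=300)  # abilities, treated as known here
b_true = 0.5
y = (rng.random(300) < 1 / (1 + np.exp(-(theta - b_true)))).astype(int)

grid = np.linspace(-4, 4, 2001)
dx = grid[1] - grid[0]
p = 1 / (1 + np.exp(-(theta[:, None] - grid[None, :])))  # P(correct | theta, b)
loglik = (y[:, None] * np.log(p) + (1 - y[:, None]) * np.log(1 - p)).sum(axis=0)

# Approximate posterior for the difficulty b with a N(0, 1) prior.
post = np.exp(loglik - loglik.max()) * norm.pdf(grid)
post /= post.sum() * dx
post_mean = np.sum(grid * post) * dx
post_sd = np.sqrt(np.sum((grid - post_mean) ** 2 * post) * dx)

# Maximum likelihood estimate on the same grid and its information-based SE.
b_ml = grid[np.argmax(loglik)]
p_ml = 1 / (1 + np.exp(-(theta - b_ml)))
se_ml = 1 / np.sqrt(np.sum(p_ml * (1 - p_ml)))
print(f"posterior SD = {post_sd:.3f}, ML standard error = {se_ml:.3f}")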
Dirkzwager, Arie – International Journal of Testing, 2003
The crux in psychometrics is how to estimate the probability that a respondent answers an item correctly on one occasion out of many. Under the current testing paradigm this probability is estimated using all kinds of statistical techniques and mathematical modeling. Multiple evaluation is a new testing paradigm using the person's own personal…
Descriptors: Psychometrics, Probability, Models, Measurement
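A minimal sketch of how elicited answer probabilities can be scored with a proper (Brier-type quadratic) rule, which makes honest reporting of one's personal probabilities the best strategy in expectation; the article's own scoring function and elicitation format may differ.

import numpy as np

def quadratic_score(reported, correct_index):
    """Higher is better; expected score is maximized by reporting true beliefs."""
    reported = np.asarray(reported, dtype=float)
    outcome = np.zeros_like(reported)
    outcome[correct_index] = 1.0
    return 1.0 - np.sum((reported - outcome) ** 2)

# A respondent spreads probability over four options; option 2 is correct.
print(quadratic_score([0.1, 0.2, 0.6, 0.1], correct_index=2))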