Showing 1 to 15 of 259 results
Peer reviewed
Ken A. Fujimoto; Carl F. Falk – Educational and Psychological Measurement, 2024
Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a…
Descriptors: Item Response Theory, Rating Scales, Predictive Measurement, Bayesian Statistics
Peer reviewed
Wind, Stefanie A. – Educational and Psychological Measurement, 2023
Rating scale analysis techniques provide researchers with practical tools for examining the degree to which ordinal rating scales (e.g., Likert-type scales or performance assessment rating scales) function in psychometrically useful ways. When rating scales function as expected, researchers can interpret ratings in the intended direction (i.e.,…
Descriptors: Rating Scales, Testing Problems, Item Response Theory, Models
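For context, one model widely used in such rating scale analyses is Andrich's rating scale model; the snippet does not give a formulation, but in the standard parameterization the probability that person $n$ responds in category $k$ of $m$ ordered categories on item $i$ is

\[
P(X_{ni} = k) = \frac{\exp\sum_{j=0}^{k}\bigl(\theta_n - \delta_i - \tau_j\bigr)}{\sum_{l=0}^{m}\exp\sum_{j=0}^{l}\bigl(\theta_n - \delta_i - \tau_j\bigr)}, \qquad \tau_0 \equiv 0,
\]

where $\theta_n$ is the person location, $\delta_i$ the item location, and $\tau_1,\dots,\tau_m$ are threshold parameters shared across items. Disordered threshold estimates are one common sign that a scale is not functioning as intended.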
Peer reviewed
Huang, Sijia; Luo, Jinwen; Cai, Li – Educational and Psychological Measurement, 2023
Random item effects item response theory (IRT) models, which treat both person and item effects as random, have received much attention for more than a decade. The random item effects approach has several advantages in many practical settings. The present study introduced an explanatory multidimensional random item effects rating scale model. The…
Descriptors: Rating Scales, Item Response Theory, Models, Test Items
Peer reviewed
Philippe Goldammer; Peter Lucas Stöckli; Yannik Andrea Escher; Hubert Annen; Klaus Jonas – Educational and Psychological Measurement, 2024
Indirect indices for faking detection in questionnaires make use of a respondent's deviant or unlikely response pattern over the course of the questionnaire to identify them as a faker. Compared with established direct faking indices (i.e., lying and social desirability scales), indirect indices have at least two advantages: First, they cannot be…
Descriptors: Identification, Deception, Psychological Testing, Validity
Peer reviewed
Rebekka Kupffer; Susanne Frick; Eunike Wetzel – Educational and Psychological Measurement, 2024
The multidimensional forced-choice (MFC) format is an alternative to rating scales in which participants rank items according to how well the items describe them. Currently, little is known about how to detect careless responding in MFC data. The aim of this study was to adapt a number of indices used for rating scales to the MFC format and…
Descriptors: Measurement Techniques, Alternative Assessment, Rating Scales, Questionnaires
Peer reviewed
Elliott, Mark; Buttery, Paula – Educational and Psychological Measurement, 2022
We investigate two non-iterative estimation procedures for Rasch models, the pair-wise estimation procedure (PAIR) and the Eigenvector method (EVM), and identify theoretical issues with EVM for rating scale model (RSM) threshold estimation. We develop a new procedure to resolve these issues--the conditional pairwise adjacent thresholds procedure…
Descriptors: Item Response Theory, Rating Scales, Computation, Simulation
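As a rough illustration of the pairwise idea behind PAIR, the sketch below estimates dichotomous Rasch item difficulties from pairwise win/loss counts in Python. It is only a Choppin-style sketch of the general approach, not the rating scale threshold procedure the article develops; the function name, the 0.5 smoothing constant, and the simple column-averaging step are illustrative assumptions.

```python
import numpy as np

def pairwise_rasch_difficulties(X):
    """Choppin-style pairwise difficulty estimates for dichotomous Rasch data.

    X: persons-by-items array of 0/1 responses (complete data assumed).
    Returns item difficulties centred at zero. Illustrative sketch only.
    """
    n_items = X.shape[1]
    D = np.zeros((n_items, n_items))  # D[i, j] approximates b_j - b_i
    for i in range(n_items):
        for j in range(n_items):
            if i == j:
                continue
            n_ij = np.sum((X[:, i] == 1) & (X[:, j] == 0))  # i right, j wrong
            n_ji = np.sum((X[:, i] == 0) & (X[:, j] == 1))  # i wrong, j right
            D[i, j] = np.log((n_ij + 0.5) / (n_ji + 0.5))   # smoothed log-odds
    b = D.mean(axis=0)        # average implied difficulty per item
    return b - b.mean()       # centre the scale at zero
```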
Peer reviewed
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Peer reviewed
Jin, Kuan-Yu; Eckes, Thomas – Educational and Psychological Measurement, 2022
Performance assessments heavily rely on human ratings. These ratings are typically subject to various forms of error and bias, threatening the assessment outcomes' validity and fairness. Differential rater functioning (DRF) is a special kind of threat to fairness manifesting itself in unwanted interactions between raters and performance- or…
Descriptors: Performance Based Assessment, Rating Scales, Test Bias, Student Evaluation
Peer reviewed
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
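For reference, in the usual binary-outcome formulation of the Thurstonian IRT model, the probability of preferring item $i$ (loading on trait $\eta_a$) over item $k$ (loading on trait $\eta_b$) is modeled as

\[
P(y_{ik} = 1 \mid \eta_a, \eta_b) = \Phi\!\left(\frac{-\gamma_{ik} + \lambda_i \eta_a - \lambda_k \eta_b}{\sqrt{\psi_i^2 + \psi_k^2}}\right),
\]

where $\gamma_{ik}$ is a pair threshold, $\lambda$ are factor loadings, and $\psi^2$ are uniquenesses of the latent item utilities. This is only the general form; the article's simulations and scoring recommendations are not reproduced here.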
Peer reviewed
Wind, Stefanie A.; Jones, Eli – Educational and Psychological Measurement, 2018
Previous research includes frequent admonitions regarding the importance of establishing connectivity in data collection designs prior to the application of Rasch models. However, details regarding the influence of characteristics of the linking sets used to establish connections among facets, such as locations on the latent variable, model-data…
Descriptors: Data Collection, Goodness of Fit, Computation, Networks
Peer reviewed
Matlock Cole, Ki Lynn; Turner, Ronna C.; Gitchel, W. Dent – Educational and Psychological Measurement, 2018
The generalized partial credit model (GPCM) is often used for polytomous data; however, the nominal response model (NRM) allows for the investigation of how adjacent categories may discriminate differently when items are positively or negatively worded. Ten items from three different self-reported scales were used (anxiety, depression, and…
Descriptors: Item Response Theory, Anxiety, Depression (Psychology), Self Evaluation (Individuals)
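The contrast the snippet alludes to is visible in the two models' standard category response functions. The GPCM constrains all categories of item $i$ to share one slope $a_i$,

\[
P(X_i = k \mid \theta) = \frac{\exp\sum_{j=1}^{k} a_i(\theta - b_{ij})}{\sum_{l=0}^{m_i}\exp\sum_{j=1}^{l} a_i(\theta - b_{ij})},
\]

whereas the NRM gives every category its own slope and intercept,

\[
P(X_i = k \mid \theta) = \frac{\exp(a_{ik}\theta + c_{ik})}{\sum_{l=0}^{m_i}\exp(a_{il}\theta + c_{il})},
\]

which is what allows adjacent categories of positively versus negatively worded items to discriminate differently. (Empty sums are taken to be zero.)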
Peer reviewed
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
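IRTree analyses of extreme response style are commonly set up by recoding each ordinal response into binary pseudo-items for separate response processes. The Python sketch below shows one such Böckenholt-style coding for a 5-point scale; the node structure and function name are illustrative assumptions, and the mixture component described in the article is not reproduced.

```python
def likert_to_irtree_pseudoitems(x):
    """Recode a 5-point Likert response (1..5) into three binary pseudo-items:
    midpoint use, direction (agree vs. disagree), and extremity.

    None marks a structurally missing node (direction and extremity are
    undefined when the midpoint was chosen). Illustrative coding only.
    """
    midpoint = 1 if x == 3 else 0
    if x == 3:
        direction, extreme = None, None
    else:
        direction = 1 if x >= 4 else 0      # 4 or 5 = agree side
        extreme = 1 if x in (1, 5) else 0   # endpoint category chosen
    return midpoint, direction, extreme


# Example: responses 1..5 map to (midpoint, direction, extremity) triples
print([likert_to_irtree_pseudoitems(x) for x in range(1, 6)])
# [(0, 0, 1), (0, 0, 0), (1, None, None), (0, 1, 0), (0, 1, 1)]
```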
Peer reviewed
Andrich, David – Educational and Psychological Measurement, 2016
This article reproduces correspondence between Georg Rasch of The University of Copenhagen and Benjamin Wright of The University of Chicago in the period from January 1966 to July 1967. This correspondence reveals their struggle to operationalize a unidimensional measurement model with sufficient statistics for responses in a set of ordered…
Descriptors: Statistics, Item Response Theory, Rating Scales, Mathematical Models
Peer reviewed
Raykov, Tenko; Pohl, Steffi – Educational and Psychological Measurement, 2013
A procedure for examining essential unidimensionality in multicomponent measuring instruments is discussed. The method is based on an application of latent variable modeling and is concerned with the extent to which a common factor for all components of a given scale accounts for their correlations. The approach provides point and interval…
Descriptors: Measures (Individuals), Statistical Analysis, Factor Structure, Correlation
Peer reviewed
Attali, Yigal – Educational and Psychological Measurement, 2014
This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Descriptors: Responses, Item Response Theory, Scores, Rating Scales
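Rankings of small sets like these are often scaled with a rank-ordered (Plackett-Luce) logit model; the snippet does not specify the article's scaling model, so the generic form below is only an assumption about what such scaling can look like:

\[
P(r_1 \succ r_2 \succ \cdots \succ r_J) = \prod_{j=1}^{J} \frac{\exp(v_{r_j})}{\sum_{l=j}^{J}\exp(v_{r_l})},
\]

where $v_{r_j}$ is the quality parameter of the response ranked in position $j$.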