Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal designs are increasingly common in psychiatric clinical trials. It is therefore of utmost importance to study the psychometric properties of the rating scales frequently used in these trials within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Bolt, Daniel M.; Johnson, Timothy R. – Applied Psychological Measurement, 2009
A multidimensional item response theory model that accounts for response style factors is presented. The model, a multidimensional extension of Bock's nominal response model, is shown to allow for the study and control of response style effects in ordered rating scale data so as to reduce bias in measurement of the intended trait. In the current…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Individual Differences
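For orientation (a standard formulation, not reproduced from the article), Bock's nominal response model, which the abstract says is being extended, gives the probability of choosing category $k$ of an item as

```latex
P(X = k \mid \theta) = \frac{\exp(a_k \theta + c_k)}{\sum_{m=1}^{K} \exp(a_m \theta + c_m)}
```

where $a_k$ and $c_k$ are category slope and intercept parameters. In the multidimensional extension sketched in the abstract, $a_k \theta$ becomes an inner product $\mathbf{a}_k^{\top}\boldsymbol{\theta}$, so that one dimension can absorb response-style variance while another carries the intended trait.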
Gibbons, Robert D.; Bock, R. Darrell; Hedeker, Donald; Weiss, David J.; Segawa, Eisuke; Bhaumik, Dulal K.; Kupfer, David J.; Frank, Ellen; Grochocinski, Victoria J.; Stover, Angela – Applied Psychological Measurement, 2007
A plausible factorial structure for many types of psychological and educational tests exhibits a general factor and one or more group or method factors. This structure can be represented by a bifactor model. The bifactor structure results from the constraint that each item has a nonzero loading on the primary dimension and, at most, one of the…
Descriptors: Factor Analysis, Item Response Theory, Computation, Factor Structure
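As an illustration of the bifactor constraint described above (a minimal sketch, not code from the article; all loading values are hypothetical), each item loads on the general factor and on at most one group factor:

```python
import numpy as np

# Hypothetical loadings: six items, one general factor, two orthogonal
# group factors. Column 0 is the general (primary) dimension.
loadings = np.array([
    # general, group 1, group 2
    [0.7, 0.4, 0.0],
    [0.6, 0.5, 0.0],
    [0.8, 0.3, 0.0],
    [0.5, 0.0, 0.6],
    [0.7, 0.0, 0.4],
    [0.6, 0.0, 0.5],
])

# The bifactor constraint: every item has a nonzero loading on the
# general dimension and on at most one group factor.
assert np.all(loadings[:, 0] != 0)
assert np.all((loadings[:, 1:] != 0).sum(axis=1) <= 1)

# Model-implied correlation matrix under orthogonal factors:
# Sigma = Lambda Lambda' + Psi, with Psi a diagonal of unique variances.
psi = np.diag(1.0 - (loadings ** 2).sum(axis=1))
sigma = loadings @ loadings.T + psi
```

Because the unique variances are chosen to complement the squared loadings, the implied matrix has unit diagonal, as a correlation matrix should.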

Lindell, Michael K. – Applied Psychological Measurement, 2001
Developed an index for assessing interrater agreement with respect to a single target using a multi-item rating scale. The variance of rater mean scale scores is used as the numerator of the agreement index. Studied four variants of a disattenuated agreement index that vary in the random response term used as the denominator. (SLD)
Descriptors: Evaluation Methods, Interrater Reliability, Rating Scales
Williams, Natasha J.; Beretvas, S. Natasha – Applied Psychological Measurement, 2006
The relationship between the hierarchical generalized linear model (HGLM) and item response theory (IRT) models has been demonstrated for dichotomous items. The current study demonstrated the use of the HGLM for polytomous items (termed PHGLM) for identification of differential item functioning (DIF). First, the algebraic equivalence between…
Descriptors: Identification, Rating Scales, Test Items, Item Response Theory

Barnes, Janet L.; Landy, Frank J. – Applied Psychological Measurement, 1979
Although behaviorally anchored rating scales have both intuitive and empirical appeal, they have not always yielded superior results in contrast with graphic rating scales. Results indicate that the choice of an anchoring procedure will depend on the nature of the actual rating process. (Author/JKS)
Descriptors: Behavior Rating Scales, Comparative Testing, Higher Education, Rating Scales

Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
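The statistic discussed in this abstract can be sketched as follows (a standard linearly or quadratically weighted kappa for two raters, not code from the article):

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, n_categories, weight="linear"):
    """Weighted kappa for two raters on an ordinal scale 0..n_categories-1."""
    # Joint frequency table of the two raters' ratings, as proportions
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(ratings_a, ratings_b):
        conf[a, b] += 1
    conf /= conf.sum()
    # Disagreement weights grow with the distance between categories,
    # quantifying the relative seriousness of each disagreement
    i, j = np.indices((n_categories, n_categories))
    dist = np.abs(i - j) / (n_categories - 1)
    w = dist if weight == "linear" else dist ** 2
    # Chance-expected table from the two raters' marginal distributions
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    return 1.0 - (w * conf).sum() / (w * expected).sum()
```

Perfect agreement yields 1; agreement no better than chance yields 0, and systematic disagreement is negative.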

van Schuur, Wijbrandt H.; Kiers, Henk A. L. – Applied Psychological Measurement, 1994
The identification of two factors when one factor is expected is an artifact caused by using factor analysis on data that would be more appropriately analyzed with a unidimensional unfolding model. A numerical illustration is given, and ways to determine whether data conform to the unidimensional unfolding model are reviewed. (SLD)
Descriptors: Factor Analysis, Factor Structure, Matrices, Models

Meiser, Thorsten; And Others – Applied Psychological Measurement, 1995
The mixed Rasch model integrates Rasch and latent class approaches by dividing the population into classes that conform to Rasch models with class-specific parameters, enabling the modeling of qualitatively different patterns of change with the homogeneity assumption retained within, but not between, classes. An empirical example is given. (SLD)
Descriptors: Change, Comparative Analysis, Item Response Theory, Rating Scales
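For orientation (a standard formulation of the mixed Rasch model, not taken from the article), the population is divided into latent classes $c = 1, \dots, C$ with mixing proportions $\pi_c$ and class-specific parameters:

```latex
P(X_{vi} = 1) = \sum_{c=1}^{C} \pi_c \,
\frac{\exp(\theta_{v|c} - \sigma_{i|c})}{1 + \exp(\theta_{v|c} - \sigma_{i|c})}
```

where $\theta_{v|c}$ is person $v$'s location and $\sigma_{i|c}$ is item $i$'s difficulty within class $c$. The Rasch homogeneity assumption holds within each class, but the item parameters may differ between classes, which is what allows qualitatively different patterns of change.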
Lam, Tony C. M.; Kolic, Mary – Applied Psychological Measurement, 2008
Semantic incompatibility, an error in constructing measuring instruments for rating oneself, others, or objects, refers to the extent to which item wordings are incongruent with, and hence inappropriate for, scale labels and vice versa. This study examines the effects of semantic incompatibility on rating responses. Using a 2 x 2 factorial design…
Descriptors: Semantics, Rating Scales, Statistical Analysis, Academic Ability
Noel, Yvonnick; Dauvier, Bruno – Applied Psychological Measurement, 2007
An item response model is proposed for the analysis of continuous response formats in an item response theory (IRT) framework. With such formats, respondents are asked to report their response as a mark on a fixed-length graphical segment whose ends are labeled with extreme responses. An interpolation process is proposed as the response mechanism…
Descriptors: Simulation, Item Response Theory, Models, Responses

Andrich, David – Applied Psychological Measurement, 1978
When the logistic function is substituted for the normal, Thurstone's Case V specialization of the law of comparative judgment for paired comparison responses yields an equation for estimating item scale values identical to that of the Rasch formulation for direct responses. Comparisons are made. (Author/CTM)
Descriptors: Item Analysis, Latent Trait Theory, Mathematical Models, Rating Scales
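The equivalence noted in this abstract can be sketched as follows (standard results, not reproduced from the article). With the logistic replacing the normal, Case V gives for a paired comparison

```latex
P(i \succ j) = \frac{\exp(\beta_i - \beta_j)}{1 + \exp(\beta_i - \beta_j)}
```

while the Rasch model for a direct dichotomous response is $P(X = 1) = \exp(\theta - \delta)/\{1 + \exp(\theta - \delta)\}$. Both are logistic in a difference of location parameters, so they lead to the same form of estimating equations for the scale values $\beta_i$.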

Dawes, Robyn M. – Applied Psychological Measurement, 1977
Staff members of the Psychology department at the University of Oregon rated each other's height on five rating scales representative of those found in social psychology. Average ratings proved to be very good estimates of height. (Author/JKS)
Descriptors: College Faculty, Height, Males, Measurement Techniques

Bejar, Isaac I. – Applied Psychological Measurement, 1977
Samejima's latent trait model for continuous responses was applied to the Impulsivity and Harmavoidance scales of Jackson's Personality Research Form; attention was given to the requirement that the model be invariant across populations and sex groups. Responses from males were found to fit the model better than those from…
Descriptors: Higher Education, Latent Trait Theory, Mathematical Models, Rating Scales

Schriesheim, Chester A.; And Others – Applied Psychological Measurement, 1989
LISREL maximum likelihood confirmatory factor analyses assessed the effects of grouped and random formats on convergent and discriminant validity of two sets of questionnaires--job characteristics scales and satisfaction measures--each administered to 80 college students. The grouped format was superior, and the usefulness of LISREL confirmatory…
Descriptors: College Students, Higher Education, Measures (Individuals), Questionnaires