Showing 1 to 15 of 40 results
Peer reviewed
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal studies are permeating clinical trials in psychiatry. Therefore, it is of utmost importance to study the psychometric properties of rating scales, frequently used in these trials, within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Peer reviewed
Bolt, Daniel M.; Johnson, Timothy R. – Applied Psychological Measurement, 2009
A multidimensional item response theory model that accounts for response style factors is presented. The model, a multidimensional extension of Bock's nominal response model, is shown to allow for the study and control of response style effects in ordered rating scale data so as to reduce bias in measurement of the intended trait. In the current…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Individual Differences
Peer reviewed
Gibbons, Robert D.; Bock, R. Darrell; Hedeker, Donald; Weiss, David J.; Segawa, Eisuke; Bhaumik, Dulal K.; Kupfer, David J.; Frank, Ellen; Grochocinski, Victoria J.; Stover, Angela – Applied Psychological Measurement, 2007
A plausible factorial structure for many types of psychological and educational tests exhibits a general factor and one or more group or method factors. This structure can be represented by a bifactor model. The bifactor structure results from the constraint that each item has a nonzero loading on the primary dimension and, at most, one of the…
Descriptors: Factor Analysis, Item Response Theory, Computation, Factor Structure
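The bifactor constraint described in the abstract (every item loads on the general factor and on at most one group factor) can be sketched as a loading-pattern matrix; the item counts and loading values below are purely illustrative:

```python
import numpy as np

# Hypothetical 6-item test: one general factor plus two group factors,
# items 0-2 assigned to group factor 1, items 3-5 to group factor 2.
n_items = 6
loadings = np.zeros((n_items, 3))   # columns: general, group 1, group 2
loadings[:, 0] = 0.6                # every item loads on the general factor
loadings[0:3, 1] = 0.4              # each item loads on at most ONE group factor
loadings[3:6, 2] = 0.4

# The bifactor constraint: per item, exactly one nonzero group loading.
group_loadings_per_item = (loadings[:, 1:] != 0).sum(axis=1)
print(group_loadings_per_item.tolist())  # [1, 1, 1, 1, 1, 1]
```

This restriction is what makes the model tractable: conditional on the general factor, items in different groups are independent, so the full-information estimation reduces to low-dimensional integrals.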
Peer reviewed
Lindell, Michael K. – Applied Psychological Measurement, 2001
Developed an index for assessing interrater agreement with respect to a single target using a multi-item rating scale. The variance of rater mean scale scores is used as the numerator of the agreement index. Studied four variants of a disattenuated agreement index that vary in the random response term used as the denominator. (SLD)
Descriptors: Evaluation Methods, Interrater Reliability, Rating Scales
Peer reviewed
Williams, Natasha J.; Beretvas, S. Natasha – Applied Psychological Measurement, 2006
The relationship between the hierarchical generalized linear model (HGLM) and item response theory (IRT) models has been demonstrated for dichotomous items. The current study demonstrated the use of the HGLM for polytomous items (termed PHGLM) for identification of differential item functioning (DIF). First, the algebraic equivalence between…
Descriptors: Identification, Rating Scales, Test Items, Item Response Theory
Peer reviewed
Barnes, Janet L.; Landy, Frank J. – Applied Psychological Measurement, 1979
Although behaviorally anchored rating scales have both intuitive and empirical appeal, they have not always yielded superior results in contrast with graphic rating scales. Results indicate that the choice of an anchoring procedure will depend on the nature of the actual rating process. (Author/JKS)
Descriptors: Behavior Rating Scales, Comparative Testing, Higher Education, Rating Scales
Peer reviewed
Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
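The weighted kappa idea is simple to state in code: disagreement weights grow with the distance between ordinal categories, so near-misses count against agreement less than extreme disagreements. A minimal sketch (the contingency table below is invented for illustration):

```python
import numpy as np

def weighted_kappa(table, weights="linear"):
    """Weighted kappa for a k x k table of counts from two raters.

    Observed and chance-expected disagreement are both weighted by the
    distance between the ordinal categories, then compared.
    """
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    i, j = np.indices((k, k))
    w = np.abs(i - j)                 # linear disagreement weights
    if weights == "quadratic":
        w = w ** 2
    n = table.sum()
    observed = table / n
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n ** 2
    return 1.0 - (w * observed).sum() / (w * expected).sum()

# Two raters scoring 100 cases on a 3-point ordinal scale (made-up data).
counts = [[30, 5, 0],
          [4, 40, 6],
          [1, 4, 10]]
print(round(weighted_kappa(counts), 3))
```

Perfect agreement (a purely diagonal table) gives kappa = 1, and a table matching chance expectation gives 0, which is the calibration the Monte Carlo study exploits.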
Peer reviewed
van Schuur, Wijbrandt H.; Kiers, Henk A. L. – Applied Psychological Measurement, 1994
The identification of two factors when one factor is expected is an artifact caused by using factor analysis on data that would be more appropriately analyzed with a unidimensional unfolding model. A numerical illustration is given, and ways to determine whether data conform to the unidimensional unfolding model are reviewed. (SLD)
Descriptors: Factor Analysis, Factor Structure, Matrices, Models
Peer reviewed
Meiser, Thorsten; And Others – Applied Psychological Measurement, 1995
The mixed Rasch model integrates Rasch and latent class approaches by dividing the population into classes that conform to Rasch models with class-specific parameters, enabling the modeling of qualitatively different patterns of change with the homogeneity assumption retained within, but not between, classes. An empirical example is given. (SLD)
Descriptors: Change, Comparative Analysis, Item Response Theory, Rating Scales
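The mixed Rasch idea (ordinary Rasch within each latent class, class-specific item parameters across classes) can be sketched as a mixture of logistic item response curves; the class weights and difficulties below are invented for illustration:

```python
import math

def p_item(theta, b):
    """Rasch probability of a positive response, person theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Two latent classes with class-specific item difficulties. Within a class
# the Rasch model holds (homogeneity retained within classes); across
# classes the difficulties differ (homogeneity relaxed between classes).
difficulties = {"class_1": [-1.0, 0.0, 1.0],
                "class_2": [0.5, 0.5, 0.5]}
class_weights = {"class_1": 0.6, "class_2": 0.4}

theta = 0.3
# Marginal probability of endorsing item 0: mixture over the two classes.
p_marginal = sum(class_weights[c] * p_item(theta, difficulties[c][0])
                 for c in difficulties)
print(round(p_marginal, 3))
```

The marginal probability always lies between the two class-conditional probabilities, which is what lets the mixture capture qualitatively different response patterns that a single Rasch model would average away.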
Peer reviewed
Lam, Tony C. M.; Kolic, Mary – Applied Psychological Measurement, 2008
Semantic incompatibility, an error in constructing measuring instruments for rating oneself, others, or objects, refers to the extent to which item wordings are incongruent with, and hence inappropriate for, scale labels and vice versa. This study examines the effects of semantic incompatibility on rating responses. Using a 2 x 2 factorial design…
Descriptors: Semantics, Rating Scales, Statistical Analysis, Academic Ability
Peer reviewed
Noel, Yvonnick; Dauvier, Bruno – Applied Psychological Measurement, 2007
An item response model is proposed for the analysis of continuous response formats in an item response theory (IRT) framework. With such formats, respondents are asked to report their response as a mark on a fixed-length graphical segment whose ends are labeled with extreme responses. An interpolation process is proposed as the response mechanism…
Descriptors: Simulation, Item Response Theory, Models, Responses
Peer reviewed
Andrich, David – Applied Psychological Measurement, 1978
When the logistic function is substituted for the normal, Thurstone's Case V specialization of the law of comparative judgment for paired comparison responses gives an identical equation for the estimation of item scale values, as does the Rasch formulation for direct responses. Comparisons are made. (Author/CTM)
Descriptors: Item Analysis, Latent Trait Theory, Mathematical Models, Rating Scales
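The equivalence Andrich notes is visible once both models are written as logistic functions of a difference: Case V with a logistic response function gives P(i preferred to j) as a function of the difference of scale values, which is the same functional form as the Rasch probability in the difference person minus item. A minimal sketch:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Thurstone Case V with the logistic substituted for the normal:
# preference probability depends only on the difference of scale values.
def p_prefer(scale_i, scale_j):
    return logistic(scale_i - scale_j)

# Rasch model for a direct dichotomous response: probability depends only
# on the difference between person ability and item difficulty.
def p_rasch(theta, b):
    return logistic(theta - b)

# Identical functional form in the difference.
print(p_prefer(1.0, 0.2) == p_rasch(1.0, 0.2))  # True
```

Because both reduce to the same logistic-in-a-difference form, estimating item scale values from paired comparisons and from direct responses yields the identical estimation equation the abstract describes.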
Peer reviewed
Dawes, Robyn M. – Applied Psychological Measurement, 1977
Staff members of the Psychology department at the University of Oregon rated each other's height on five rating scales representative of those found in social psychology. Average ratings proved to be very good estimates of height. (Author/JKS)
Descriptors: College Faculty, Height, Males, Measurement Techniques
Peer reviewed
Bejar, Isaac I. – Applied Psychological Measurement, 1977
Samejima's latent trait model for continuous responses was applied to the Impulsivity and Harmavoidance scales of Jackson's Personality Research Form; attention was given to the requirement that the model be invariant across populations and sex groups. Responses from males were found to fit the model better than those from…
Descriptors: Higher Education, Latent Trait Theory, Mathematical Models, Rating Scales
Peer reviewed
Schriesheim, Chester A.; And Others – Applied Psychological Measurement, 1989
LISREL maximum likelihood confirmatory factor analyses assessed the effects of grouped and random formats on convergent and discriminant validity of two sets of questionnaires--job characteristics scales and satisfaction measures--each administered to 80 college students. The grouped format was superior, and the usefulness of LISREL confirmatory…
Descriptors: College Students, Higher Education, Measures (Individuals), Questionnaires