Showing 1 to 15 of 20 results
Peer reviewed
Zhang, Jinming – Applied Psychological Measurement, 2012
It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…
Descriptors: Simulation, Computation, Models, Statistical Analysis
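The simple-structure situation described in this abstract can be made concrete with a small simulation. The sketch below is an illustrative assumption, not code from the article: it generates two correlated abilities, each measured only by its own subtest of 2PL items, which is the case in which separate unidimensional calibrations and a joint multidimensional calibration target the same item parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items_per_subtest = 2000, 10
rho = 0.6  # correlation between the two latent traits (assumed value)

# Correlated abilities (theta1, theta2) for a two-subtest assessment.
cov = np.array([[1.0, rho], [rho, 1.0]])
theta = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_persons)

# Simple structure: each item loads on exactly one trait (2PL model).
a = rng.uniform(0.8, 2.0, size=(2, n_items_per_subtest))   # discriminations
b = rng.normal(0.0, 1.0, size=(2, n_items_per_subtest))    # difficulties

responses = {}
for subtest in range(2):
    # P(correct) = logistic(a * (theta_subtest - b)); only one trait is involved.
    logits = a[subtest] * (theta[:, [subtest]] - b[subtest])
    p = 1.0 / (1.0 + np.exp(-logits))
    responses[subtest] = (rng.uniform(size=p.shape) < p).astype(int)

print({k: v.shape for k, v in responses.items()})  # each subtest: (2000, 10)
```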
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2009
Differential item functioning (DIF) occurs when an item on a test or questionnaire has different measurement properties for one group of people versus another, irrespective of mean differences on the construct. There are many methods available for DIF assessment. The present article is focused on indices of partial association. A family of average…
Descriptors: Test Bias, Measurement, Correlation, Methods
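One widely used index of partial association in DIF work is the Mantel-Haenszel common odds ratio, which conditions on total-score strata. The sketch below is a generic illustration with hypothetical counts, not the averaged indices this article studies.

```python
import numpy as np

def mantel_haenszel_odds_ratio(tables):
    """Common odds ratio across 2x2 tables, one table per matching stratum.

    Each table is [[A, B], [C, D]]:
        rows    = reference group, focal group
        columns = item correct, item incorrect
    """
    num = den = 0.0
    for table in tables:
        (A, B), (C, D) = table
        n = A + B + C + D
        num += A * D / n
        den += B * C / n
    return num / den

# Hypothetical counts for three total-score strata (illustration only).
strata = [
    [[40, 10], [30, 20]],
    [[35, 15], [25, 25]],
    [[20, 30], [10, 40]],
]
alpha_mh = mantel_haenszel_odds_ratio(strata)
print("MH odds ratio:", round(alpha_mh, 3))
# Values far from 1 suggest DIF after conditioning on the matching score.
```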
Peer reviewed
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal studies are permeating clinical trials in psychiatry. Therefore, it is of utmost importance to study the psychometric properties of rating scales, frequently used in these trials, within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Peer reviewed
Templin, Jonathan L.; Henson, Robert A.; Templin, Sara E.; Roussos, Louis – Applied Psychological Measurement, 2008
Several types of parameterizations of attribute correlations in cognitive diagnosis models can be specified with the reduced reparameterized unified model. The general approach presumes an unconstrained correlation matrix with K(K - 1)/2 parameters, whereas the higher order approach postulates K parameters, imposing a unidimensional structure on the correlation…
Descriptors: Factor Structure, Identification, Correlation, Computation
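The contrast between the two parameterizations is easy to see in a small numeric example. The sketch below (illustrative, not the article's code) counts the free parameters for K attributes and builds the correlation matrix implied by a higher-order structure, in which each attribute correlation is the product of two loadings.

```python
import numpy as np

K = 5

# Unstructured correlation matrix among K attributes:
n_unstructured = K * (K - 1) // 2          # 10 free correlations for K = 5

# Higher-order structure: attribute k relates to a single general dimension
# through one loading lambda_k, so only K parameters are needed.
lambdas = np.array([0.5, 0.6, 0.7, 0.8, 0.9])   # hypothetical loadings
implied = np.outer(lambdas, lambdas)
np.fill_diagonal(implied, 1.0)                  # rho_jk = lambda_j * lambda_k, j != k

print("unstructured parameters:", n_unstructured)
print("higher-order parameters:", K)
print(np.round(implied, 2))
```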
Peer reviewed
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
Peer reviewed
van Abswoude, Alexandra A. H.; van der Ark, L. Andries; Sijtsma, Klaas – Applied Psychological Measurement, 2004
In this article, an overview of nonparametric item response theory methods for determining the dimensionality of item response data is provided. Four methods were considered: MSP, DETECT, HCA/CCPROX, and DIMTEST. First, the methods were compared theoretically. Second, a simulation study was done to compare the effectiveness of MSP, DETECT, and…
Descriptors: Comparative Analysis, Computer Software, Simulation, Nonparametric Statistics
Peer reviewed
de la Torre, Jimmy – Applied Psychological Measurement, 2008
Recent work has shown that multidimensionally scoring responses from different tests can provide better ability estimates. For educational assessment data, applications of this approach have been limited to binary scores. Of the different variants, the de la Torre and Patz model is considered more general because implementing the scoring procedure…
Descriptors: Markov Processes, Scoring, Data Analysis, Item Response Theory
Peer reviewed
Zimmerman, Donald W.; Williams, Richard H. – Applied Psychological Measurement, 2000
Restricted the range of nonnormal distributions by eliminating scores above a designated cutoff value or by eliminating scores more than a certain distance above or below the mean. Results of a simulation study showed that range restriction sometimes increased the correlation between variables having outlier-prone distributions. Discusses practical…
Descriptors: Correlation, Scores, Simulation, Statistical Distributions
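The counterintuitive result (range restriction sometimes raising a correlation when distributions are outlier prone) can be reproduced in outline with a short simulation. The sketch below uses a contaminated-normal error distribution as a stand-in for "outlier prone"; the cutoff, contamination rate, and sample size are assumptions for illustration, not the article's design.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def contaminated_normal(size):
    # 95% N(0,1), 5% N(0,10): an "outlier prone" error distribution.
    heavy = rng.uniform(size=size) < 0.05
    return np.where(heavy, rng.normal(0, 10, size), rng.normal(0, 1, size))

true_score = rng.normal(0, 1, n)
x = true_score + contaminated_normal(n)
y = true_score + contaminated_normal(n)

r_full = np.corrcoef(x, y)[0, 1]

# Range restriction: eliminate scores above a cutoff on X (here, mean + 2 SD).
keep = x <= x.mean() + 2 * x.std()
r_restricted = np.corrcoef(x[keep], y[keep])[0, 1]

print(f"full-range r = {r_full:.3f}, restricted r = {r_restricted:.3f}")
# Because truncation tends to discard the contaminated (noise-dominated) cases,
# the restricted correlation can come out larger; the outcome depends on the
# cutoff, the contamination rate, and the seed.
```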
Peer reviewed
Fleiss, Joseph L.; Cicchetti, Domenic V. – Applied Psychological Measurement, 1978
The accuracy of the large-sample standard error of weighted kappa appropriate to the non-null case was studied by computer simulation, both for testing the hypothesis that two independently derived estimates of weighted kappa are equal and for setting confidence limits around a single value of weighted kappa. (Author/CTM)
Descriptors: Correlation, Hypothesis Testing, Nonparametric Statistics, Reliability
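As background for the quantities being simulated, the sketch below computes weighted kappa and a non-null large-sample variance in the Fleiss-Cohen-Everitt style. The table of joint proportions and the weights are hypothetical, and the variance expression is reconstructed from standard accounts rather than taken from this article, so check it against the original sources before relying on it.

```python
import numpy as np

def weighted_kappa(p, w, n):
    """Weighted kappa and a non-null large-sample variance (assumed formulation).

    p : observed joint proportions for the two raters (sums to 1)
    w : agreement weights in [0, 1] (1 = full credit on the diagonal)
    n : number of subjects
    """
    row, col = p.sum(axis=1), p.sum(axis=0)
    po = (p * w).sum()                       # observed weighted agreement
    pe = (np.outer(row, col) * w).sum()      # chance-expected weighted agreement
    kappa = (po - pe) / (1 - pe)

    # Non-null asymptotic variance, Fleiss-Cohen-Everitt style (verify before use).
    w_row = w @ col                          # average weight for each row category
    w_col = row @ w                          # average weight for each column category
    term = (w * (1 - pe) - (w_row[:, None] + w_col[None, :]) * (1 - po)) ** 2
    var = ((p * term).sum() - (po * pe - 2 * pe + po) ** 2) / (n * (1 - pe) ** 4)
    return kappa, var

# Hypothetical 3x3 table of joint proportions and linear weights.
p = np.array([[0.25, 0.05, 0.02],
              [0.06, 0.30, 0.04],
              [0.02, 0.06, 0.20]])
w = 1 - np.abs(np.subtract.outer(np.arange(3), np.arange(3))) / 2   # linear weights

kappa, var = weighted_kappa(p, w, n=200)
se = np.sqrt(var)
print(f"weighted kappa = {kappa:.3f}, SE = {se:.3f}")
print(f"95% CI: [{kappa - 1.96 * se:.3f}, {kappa + 1.96 * se:.3f}]")
```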
Peer reviewed
Raju, Nambury S.; Brand, Paul A. – Applied Psychological Measurement, 2003
Proposed a new asymptotic formula for estimating the sampling variance of a correlation coefficient corrected for unreliability and range restriction. A Monte Carlo simulation study of the new formula supports several positive conclusions about the new approach. (SLD)
Descriptors: Correlation, Monte Carlo Methods, Reliability, Sampling
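The quantity whose sampling variance the new formula targets is a correlation corrected for unreliability and direct range restriction. The sketch below shows one common ordering of those point corrections (disattenuation, then the Thorndike Case II adjustment) with hypothetical numbers; the article's variance formula itself is not reproduced here.

```python
import math

def corrected_correlation(r_obs, rxx, ryy, u):
    """Correct an observed correlation for unreliability and direct range restriction.

    r_obs : observed (restricted-sample) correlation
    rxx, ryy : reliabilities of X and Y
    u : SD ratio, unrestricted SD of X / restricted SD of X (u >= 1)
    """
    r_disattenuated = r_obs / math.sqrt(rxx * ryy)          # correction for attenuation
    # Thorndike Case II correction for direct range restriction on X.
    return (u * r_disattenuated) / math.sqrt(1 + (u**2 - 1) * r_disattenuated**2)

# Hypothetical values: r = .30 in the restricted sample, reliabilities .80 and .85,
# and the unrestricted SD of X is 1.5 times the restricted SD.
print(round(corrected_correlation(0.30, 0.80, 0.85, 1.5), 3))
```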
Peer reviewed
Komaroff, Eugene – Applied Psychological Measurement, 1997
Evaluated, through simulation, coefficient alpha under violations of two classical test theory assumptions: essential tau-equivalence and uncorrelated errors. Discusses the interactive effects of both violations with true and error scores. Provides empirical evidence for the derivation of M. Novick and C. Lewis (1993). (SLD)
Descriptors: Correlation, Reliability, Simulation, Test Theory
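The study's setup can be sketched as follows: simulate item scores whose true-score loadings are unequal (violating essential tau-equivalence) and whose errors are positively correlated, then compare coefficient alpha with the composite's actual reliability. The loadings, error correlation, and sample size below are illustrative assumptions, not the article's conditions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50_000, 6

loadings = np.array([0.3, 0.5, 0.7, 0.9, 1.1, 1.3])   # unequal: not tau-equivalent
err_corr = 0.2                                          # correlated errors

true = rng.normal(0, 1, n)
err_cov = np.full((k, k), err_corr) + (1 - err_corr) * np.eye(k)
errors = rng.multivariate_normal(np.zeros(k), err_cov, size=n)
items = true[:, None] * loadings + errors

# Cronbach's alpha from the observed covariance matrix.
cov = np.cov(items, rowvar=False)
alpha = (k / (k - 1)) * (1 - np.trace(cov) / cov.sum())

# "True" reliability of the sum score: true-score variance / total variance.
total = items.sum(axis=1)
rel_true = np.var(loadings.sum() * true) / np.var(total)

print(f"alpha = {alpha:.3f}, composite reliability = {rel_true:.3f}")
# Unequal loadings make alpha understate reliability, while positively correlated
# errors inflate it; which effect dominates depends on their relative sizes.
```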
Peer reviewed
Kluge, Annette – Applied Psychological Measurement, 2008
The use of microworlds (MWs), or complex dynamic systems, in educational testing and personnel selection is hampered by systematic measurement errors because these new and innovative item formats are not adequately controlled for their difficulty. This empirical study introduces a way to operationalize an MW's difficulty and demonstrates the…
Descriptors: Personnel Selection, Self Efficacy, Educational Testing, Computer Uses in Education
Peer reviewed
Balazs, Katalin; Hidegkuti, Istvan; De Boeck, Paul – Applied Psychological Measurement, 2006
In the context of item response theory, it is not uncommon that person-by-item data are correlated beyond the correlation that is captured by the model; in other words, there is extrabinomial variation. Heterogeneity of the parameters can explain this variation. There is a need for proper statistical methods to indicate possible extra…
Descriptors: Models, Regression (Statistics), Item Response Theory, Correlation
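A generic way to see extrabinomial variation, independent of the particular methods this article develops, is to compare a Pearson dispersion statistic with 1. The sketch below is a hypothetical illustration with grouped binomial counts whose true success probabilities are heterogeneous relative to a constant-probability model.

```python
import numpy as np

def pearson_dispersion(counts, m, p_model):
    """Pearson X^2 / N for binomial counts (m trials each) under a common model p.

    Values well above 1 indicate extrabinomial variation: more spread than the
    binomial model allows, e.g. because p differs across persons or items.
    """
    expected_var = m * p_model * (1 - p_model)
    chi2 = ((counts - m * p_model) ** 2 / expected_var).sum()
    return chi2 / counts.size

rng = np.random.default_rng(4)
n_persons, m = 5_000, 10          # each person answers 10 exchangeable items

# Heterogeneous true probabilities around the model's constant p = 0.6.
p_true = np.clip(rng.normal(0.6, 0.15, n_persons), 0.01, 0.99)
counts = rng.binomial(m, p_true)

print(round(pearson_dispersion(counts, m, p_model=0.6), 3))   # well above 1.0
```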
Peer reviewed
Bost, James E. – Applied Psychological Measurement, 1995
Simulations demonstrate the effects of correlated errors on the person-by-occasion design in which the confounding effect of equal time intervals results in correlated error terms in the linear model. Two specific error correlation structures were examined. Conditions under which underestimation and overestimation occur are discussed. (SLD)
Descriptors: Analysis of Variance, Correlation, Estimation (Mathematics), Generalizability Theory
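The design in question is a persons-by-occasions G study with equally spaced occasions, where serial correlation enters the error term. The sketch below generates such data with AR(1) errors and computes the usual ANOVA variance-component estimates; the variance values and AR parameter are illustrative assumptions, not the article's conditions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_occasions = 500, 8
var_person, var_occasion, var_error, phi = 1.0, 0.2, 1.0, 0.5  # phi: AR(1) correlation

person = rng.normal(0, np.sqrt(var_person), n_persons)[:, None]
occasion = rng.normal(0, np.sqrt(var_occasion), n_occasions)[None, :]

# AR(1) errors across equally spaced occasions within each person.
e = np.zeros((n_persons, n_occasions))
e[:, 0] = rng.normal(0, np.sqrt(var_error), n_persons)
for t in range(1, n_occasions):
    innov = rng.normal(0, np.sqrt(var_error * (1 - phi**2)), n_persons)
    e[:, t] = phi * e[:, t - 1] + innov

y = person + occasion + e

# ANOVA mean squares for a persons x occasions design without replication.
grand = y.mean()
ms_p = n_occasions * ((y.mean(axis=1) - grand) ** 2).sum() / (n_persons - 1)
resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + grand
ms_res = (resid ** 2).sum() / ((n_persons - 1) * (n_occasions - 1))

print("estimated error (residual) variance:", round(ms_res, 3))
print("estimated person variance:", round((ms_p - ms_res) / n_occasions, 3))
# With positively autocorrelated errors, the residual mean square understates the
# error variance and the person component is inflated relative to its true value.
```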
Peer reviewed
Kahraman, Nilufer; Kamata, Akihito – Applied Psychological Measurement, 2004
In this study, the precision of subscale score estimates was evaluated when out-of-scale information was incorporated. Procedures that incorporated out-of-scale information and only information within a subscale were compared through a series of simulations. It was revealed that more information (i.e., more precision) was always provided for…
Descriptors: Scores, Computation, Evaluation Methods, Simulation
Pages: 1 | 2