Showing 1 to 15 of 21 results
Peer reviewed
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K. – Applied Psychological Measurement, 2013
DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…
Descriptors: Item Response Theory, Nonparametric Statistics, Statistical Analysis, Computer Software
Peer reviewed
Finkelman, Matthew D.; Weiss, David J.; Kim-Kang, Gyenam – Applied Psychological Measurement, 2010
Assessing individual change is an important topic in both psychological and educational measurement. An adaptive measurement of change (AMC) method had previously been shown to exhibit greater efficiency in detecting change than conventional nonadaptive methods. However, little work had been done to compare different procedures within the AMC…
Descriptors: Computer Assisted Testing, Hypothesis Testing, Measurement, Item Analysis
Peer reviewed
Paek, Insu – Applied Psychological Measurement, 2010
Conservative bias in rejection of a null hypothesis from using the continuity correction in the Mantel-Haenszel (MH) procedure was examined through simulation in a differential item functioning (DIF) investigation context in which statistical testing uses a prespecified level [alpha] for the decision on an item with respect to DIF. The standard MH…
Descriptors: Test Bias, Statistical Analysis, Sample Size, Error of Measurement
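The conservatism the abstract describes comes from the continuity correction: the MH statistic subtracts 0.5 from the absolute deviation before squaring, so the corrected statistic is always at most the uncorrected one. A minimal sketch of the two versions of the statistic (illustrative only, not the article's simulation code; the 2x2 table layout is an assumption):

```python
from math import erf, sqrt

def mh_chisq(tables, continuity_correction=True):
    """Mantel-Haenszel chi-square pooled over K score-matched 2x2 tables.

    Each table is (a, b, c, d): reference-correct, reference-incorrect,
    focal-correct, focal-incorrect.
    """
    sum_a = sum_ea = sum_var = 0.0
    for a, b, c, d in tables:
        n_r, n_f = a + b, c + d          # group sizes
        m1, m0 = a + c, b + d            # correct / incorrect margins
        t = n_r + n_f                    # table total
        sum_a += a
        sum_ea += n_r * m1 / t           # expected reference-correct count
        sum_var += n_r * n_f * m1 * m0 / (t * t * (t - 1))
    cc = 0.5 if continuity_correction else 0.0   # Yates-style correction
    return (abs(sum_a - sum_ea) - cc) ** 2 / sum_var

def chisq1_pvalue(x):
    """Upper-tail p-value for chi-square with 1 df, via the normal CDF."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(x) / sqrt(2.0))))
```

Because the corrected statistic is smaller, its p-value is larger, so fewer items are flagged at a fixed alpha; whether that under- or over-shoots the nominal Type I error rate is exactly what the simulation examines.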
Peer reviewed
Finch, Holmes; Habing, Brian – Applied Psychological Measurement, 2007
This Monte Carlo study compares the ability of the parametric bootstrap version of DIMTEST with three goodness-of-fit tests calculated from a fitted NOHARM model to detect violations of the assumption of unidimensionality in testing data. The effectiveness of the procedures was evaluated for different numbers of items, numbers of examinees,…
Descriptors: Guessing (Tests), Testing, Statistics, Monte Carlo Methods
Peer reviewed
Froelich, Amy G.; Habing, Brian – Applied Psychological Measurement, 2008
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
Descriptors: Test Items, Monte Carlo Methods, Form Classes (Languages), Program Effectiveness
Peer reviewed
Sotaridona, Leonardo S.; van der Linden, Wim J.; Meijer, Rob R. – Applied Psychological Measurement, 2006
A statistical test for detecting answer copying on multiple-choice tests based on Cohen's kappa is proposed. The test is free of any assumptions on the response processes of the examinees suspected of copying and having served as the source, except for the usual assumption that these processes are probabilistic. Because the asymptotic null and…
Descriptors: Cheating, Test Items, Simulation, Statistical Analysis
Peer reviewed
Monahan, Patrick O.; Stump, Timothy E.; Finch, Holmes; Hambleton, Ronald K. – Applied Psychological Measurement, 2007
DETECT is a nonparametric "full" dimensionality assessment procedure that clusters dichotomously scored items into dimensions and provides a DETECT index of magnitude of multidimensionality. Four factors (test length, sample size, item response theory [IRT] model, and DETECT index) were manipulated in a Monte Carlo study of bias, standard error,…
Descriptors: Test Length, Sample Size, Monte Carlo Methods, Geometric Concepts
Peer reviewed
Fleiss, Joseph L.; Cicchetti, Domenic V. – Applied Psychological Measurement, 1978
The accuracy of the large sample standard error of weighted kappa appropriate to the non-null case was studied by computer simulation for the hypothesis that two independently derived estimates of weighted kappa are equal, and for setting confidence limits around a single value of weighted kappa. (Author/CTM)
Descriptors: Correlation, Hypothesis Testing, Nonparametric Statistics, Reliability
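For readers unfamiliar with the statistic under study: weighted kappa is chance-corrected agreement in which disagreements are penalized by their distance on the rating scale. A minimal sketch of the point estimate (not the authors' simulation code, and omitting the large-sample standard error their study evaluates):

```python
def weighted_kappa(table, weights="linear"):
    """Weighted kappa from a k x k contingency table of two raters.

    Disagreement weights are w_ij = |i - j| ("linear") or (i - j)**2
    ("quadratic"); kappa_w = 1 - sum(w * observed) / sum(w * expected).
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    power = 1 if weights == "linear" else 2
    obs = exp = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) ** power
            obs += w * table[i][j] / n                      # observed disagreement
            exp += w * (row_tot[i] / n) * (col_tot[j] / n)  # chance disagreement
    return 1.0 - obs / exp
```

Perfect agreement gives kappa_w = 1 and chance-level agreement gives kappa_w = 0; the non-null standard error assessed in the article is what allows confidence limits around such an estimate.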
Peer reviewed
Li, Jianmin; And Others – Applied Psychological Measurement, 1992
This computer program computes the adjusted alpha level using the original Bonferroni procedure and four modified Bonferroni procedures. It is written in Statistical Analysis System (SAS) macro language. Input and output features are described. (SLD)
Descriptors: Computer Software, Computer Software Evaluation, Hypothesis Testing
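The original program is a SAS macro, and the abstract does not name the four modified procedures it implements. As an illustrative analogue only, the classic Bonferroni adjustment and one well-known step-down modification (Holm's) can be sketched as:

```python
def bonferroni(pvalues, alpha=0.05):
    """Classic Bonferroni: test every hypothesis at alpha / m."""
    m = len(pvalues)
    return [p <= alpha / m for p in pvalues]

def holm(pvalues, alpha=0.05):
    """Holm's step-down modification: test the ordered p-values at
    alpha/m, alpha/(m-1), ... and stop at the first non-rejection."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break          # all larger p-values are also retained
    return reject
```

Holm's procedure rejects everything Bonferroni does and possibly more, while still controlling the familywise error rate, which is the motivation for modified Bonferroni methods generally.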
Peer reviewed
Davison, Mark L.; And Others – Applied Psychological Measurement, 1995
General normal ogive and logistic multiple-group models for paired comparisons data are described. In these models, scale value and discriminal dispersion parameters are allowed to vary across stimuli and respondent populations. Model fitting and hypothesis testing are illustrated using health care coverage data from two age groups. (SLD)
Descriptors: Age Differences, Comparative Analysis, Hypothesis Testing, Models
Peer reviewed
Lam, Tony C. M.; Kolic, Mary – Applied Psychological Measurement, 2008
Semantic incompatibility, an error in constructing measuring instruments for rating oneself, others, or objects, refers to the extent to which item wordings are incongruent with, and hence inappropriate for, scale labels and vice versa. This study examines the effects of semantic incompatibility on rating responses. Using a 2 x 2 factorial design…
Descriptors: Semantics, Rating Scales, Statistical Analysis, Academic Ability
Peer reviewed
Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 1992
An approximate statistical test is derived for the hypothesis that the intraclass reliability coefficients associated with two measurement procedures are equal. Control of Type 1 error is investigated by comparing empirical sampling distributions of the test statistic with its derived theoretical distribution. A numerical illustration is…
Descriptors: Equations (Mathematics), Hypothesis Testing, Mathematical Models, Measurement Techniques
Peer reviewed
Bagozzi, Richard P.; Yi, Youjae – Applied Psychological Measurement, 1992
Research on the direct-product model is extended by deriving hierarchically nested models for explicitly testing the patterns of method and trait factors and through formal tests developed for the pattern of communalities. These procedures are illustrated, and use of the MUTMUM computer program is discussed. (SLD)
Descriptors: Construct Validity, Equations (Mathematics), Estimation (Mathematics), Hypothesis Testing
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability