Showing 1 to 15 of 59 results
Peer reviewed
Raykov, Tenko; Pusic, Martin – Educational and Psychological Measurement, 2023
This note is concerned with evaluation of location parameters for polytomous items in multiple-component measuring instruments. A point and interval estimation procedure for these parameters is outlined that is developed within the framework of latent variable modeling. The method permits educational, behavioral, biomedical, and marketing…
Descriptors: Item Analysis, Measurement Techniques, Computer Software, Intervals
Peer reviewed
Raykov, Tenko; Al-Qataee, Abdullah A.; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2020
A procedure for evaluation of validity related coefficients and their differences is discussed, which is applicable when one or more frequently used assumptions in empirical educational, behavioral, and social research are violated. The method is developed within the framework of the latent variable modeling methodology and accomplishes point and…
Descriptors: Validity, Evaluation Methods, Social Science Research, Correlation
Peer reviewed
Dowling, N. Maritza; Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2020
Equating of psychometric scales and tests is frequently required and conducted in educational, behavioral, and clinical research. Construct comparability or equivalence between measuring instruments is a necessary condition for making decisions about linking and equating resulting scores. This article is concerned with a widely applicable method…
Descriptors: Evaluation Methods, Psychometrics, Screening Tests, Dementia
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
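The claim in this abstract, that any principal component formed from error-containing measures itself carries measurement error, can be illustrated numerically. A minimal sketch (not the authors' latent-variable procedure): for a unit-length weight vector w and a diagonal error covariance matrix Θ for the indicators, the error variance in the component score is w'Θw.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True scores: one common latent factor measured by three indicators
f = rng.normal(size=n)
true_scores = np.column_stack([f, f, f])

# Add measurement error (variance 0.25) to each observed variable
error = rng.normal(scale=0.5, size=(n, 3))
observed = true_scores + error

# First principal component of the observed (fallible) measures
cov = np.cov(observed, rowvar=False)
vals, vecs = np.linalg.eigh(cov)      # eigenvalues ascending, unit eigenvectors
w = vecs[:, -1]                       # loadings of the first PC
pc_scores = observed @ w

# Error variance inherited by the PC: w' Theta w, with Theta = 0.25 * I here,
# so it equals 0.25 exactly because w has unit length
pc_error_var = w @ (0.25 * np.eye(3)) @ w
print(round(pc_error_var, 2))         # 0.25
```

Because the component is a linear combination of fallible measures, its error variance is strictly positive whenever any indicator's is, which is the article's starting observation.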
Peer reviewed
Wilcox, Rand R.; Serang, Sarfaraz – Educational and Psychological Measurement, 2017
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Descriptors: Hypothesis Testing, Bayesian Statistics, Computation, Effect Size
Peer reviewed
Cousineau, Denis; Laurencelle, Louis – Educational and Psychological Measurement, 2017
Assessing global interrater agreement is difficult as most published indices are affected by the presence of mixtures of agreements and disagreements. A previously proposed method was shown to be specifically sensitive to global agreement, excluding mixtures, but also negatively biased. Here, we propose two alternatives in an attempt to find what…
Descriptors: Interrater Reliability, Evaluation Methods, Statistical Bias, Accuracy
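For context on why raw agreement rates conflate true agreement with chance, here is the classical chance-corrected index, Cohen's kappa. This is background only, not one of the two alternative indices the article proposes; the ratings below are invented for illustration.

```python
import numpy as np

# Two raters assign each of 10 targets to one of two categories
rater_a = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1])
rater_b = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 1])

# Raw agreement mixes genuine agreement with agreement expected by chance
p_obs = np.mean(rater_a == rater_b)     # 0.8

# Cohen's kappa corrects using chance agreement implied by the marginal rates
p_a1, p_b1 = rater_a.mean(), rater_b.mean()
p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
kappa = (p_obs - p_chance) / (1 - p_chance)
print(round(kappa, 3))                  # 0.583
```

Indices like kappa are global summaries, which is precisely why mixtures of agreeing and disagreeing rater subgroups can distort them, the problem the article addresses.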
Peer reviewed
Raykov, Tenko; Dimitrov, Dimiter M.; von Eye, Alexander; Marcoulides, George A. – Educational and Psychological Measurement, 2013
A latent variable modeling method for evaluation of interrater agreement is outlined. The procedure is useful for point and interval estimation of the degree of agreement among a given set of judges evaluating a group of targets. In addition, the approach allows one to test for identity in underlying thresholds across raters as well as to identify…
Descriptors: Interrater Reliability, Models, Statistical Analysis, Computation
Peer reviewed
Debelak, Rudolf; Arendasy, Martin – Educational and Psychological Measurement, 2012
A new approach to identify item clusters fitting the Rasch model is described and evaluated using simulated and real data. The proposed method is based on hierarchical cluster analysis and constructs clusters of items that show a good fit to the Rasch model. It thus gives an estimate of the number of independent scales satisfying the postulates of…
Descriptors: Test Items, Factor Analysis, Evaluation Methods, Simulation
Peer reviewed
Jiao, Hong; Liu, Junhui; Haynie, Kathleen; Woo, Ada; Gorham, Jerry – Educational and Psychological Measurement, 2012
This study explored the impact of partial credit scoring of one type of innovative items (multiple-response items) in a computerized adaptive version of a large-scale licensure pretest and operational test settings. The impacts of partial credit scoring on the estimation of the ability parameters and classification decisions in operational test…
Descriptors: Test Items, Computer Assisted Testing, Measures (Individuals), Scoring
Peer reviewed
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
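The PSER rule itself requires the paper's predictive posterior machinery, but the conventional baseline it is compared against, stopping once the ability standard error falls below a fixed threshold, can be sketched under the Rasch model. The item difficulties and the 0.8 cutoff below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Under the Rasch model, an item with difficulty b contributes information
# p * (1 - p) at ability theta, and SE(theta) = 1 / sqrt(total information).
def rasch_se(theta, difficulties):
    p = 1.0 / (1.0 + np.exp(-(theta - np.asarray(difficulties))))
    return 1.0 / np.sqrt(np.sum(p * (1.0 - p)))

# Fixed-threshold stopping rule: administer items until SE < 0.8.
# (PSER instead predicts the *reduction* in SE the next item would buy
# and stops when that predicted gain becomes negligible.)
item_bank = [0.0, -0.5, 0.5, -1.0, 1.0, 0.2, -0.2, 0.8]
theta_hat = 0.0
administered = []
for b in item_bank:
    administered.append(b)
    if rasch_se(theta_hat, administered) < 0.8:
        break

print(len(administered))    # 7
```

The design difference matters for item-bank usage: a fixed SE threshold can keep administering items whose marginal contribution is tiny, whereas a predicted-reduction rule stops as soon as further items no longer pay for themselves.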
Peer reviewed
Finch, Holmes; Davis, Andrew; Dean, Raymond S. – Educational and Psychological Measurement, 2010
The current study examined the measurement invariance of the Dean-Woodcock Sensory-Motor Battery (DWSMB) for children diagnosed with attention deficit hyperactivity disorder (ADHD) and an age- and gender-matched nonclinical sample. The DWSMB is a promising new instrument for assessing a wide range of cortical and subcortical sensory and motor…
Descriptors: Attention Deficit Hyperactivity Disorder, Comparative Analysis, Screening Tests, Neurological Impairments
Peer reviewed
Chang, Shu-Ren; Plake, Barbara S.; Kramer, Gene A.; Lien, Shu-Mei – Educational and Psychological Measurement, 2011
This study examined the amount of time that different ability-level examinees spend on questions they answer correctly or incorrectly across different pretest item blocks presented on a fixed-length, time-restricted computerized adaptive test (CAT). Results indicate that different ability-level examinees require different amounts of time to…
Descriptors: Evidence, Test Items, Reaction Time, Adaptive Testing
Peer reviewed
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis
Peer reviewed
Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah – Educational and Psychological Measurement, 2009
The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…
Descriptors: Correlation, Evaluation Methods, Data Analysis, Matrices
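The mismatch this abstract describes, continuous random baselines used in parallel analysis of categorical data, can be sketched by generating comparison data that is itself dichotomous and preserves each item's endorsement rate. Pearson correlations are used below for brevity; the article's setting is polychoric-based analysis, and all data here are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 6

# Observed dichotomous data with a one-factor structure, thresholded at 0
f = rng.normal(size=(n, 1))
latent = f + rng.normal(size=(n, p))
observed = (latent > 0).astype(float)

def eigenvalues(x):
    """Eigenvalues of the correlation matrix, largest first."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

obs_eigs = eigenvalues(observed)

# Parallel analysis: compare against eigenvalues from random data that is
# also dichotomous, matching each item's endorsement proportion
props = observed.mean(axis=0)
reps = 100
rand_eigs = np.array([
    eigenvalues((rng.random(size=(n, p)) < props).astype(float))
    for _ in range(reps)
])
threshold = np.percentile(rand_eigs, 95, axis=0)

# Retain components whose observed eigenvalue exceeds the random 95th percentile
n_retain = int(np.sum(obs_eigs > threshold))
print(n_retain)
```

Generating the reference eigenvalues from categorical rather than continuous random data is the kind of adjustment the study investigates, since a continuous baseline can misstate the eigenvalues expected under independence for ordinal input.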
Peer reviewed
Vasilyeva, Marina; Ludlow, Larry H.; Casey, Beth M.; St. Onge, Caroline – Educational and Psychological Measurement, 2009
This article introduces the Measurement Skills Assessment (MeSA), which was designed to evaluate the mastery of measurement in elementary school students. The primary objectives for the MeSA include covering a broad range of measurement concepts, distinguishing between major subtypes of measurement, and constructing a continuum of items varying in…
Descriptors: Elementary School Students, Psychometrics, Measurement Techniques, Measures (Individuals)