Showing 4,636 to 4,650 of 9,552 results
Peer reviewed
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk – Applied Psychological Measurement, 2008
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…
Descriptors: Test Items, Social Desirability, Form Classes (Languages), Prediction
Peer reviewed
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua – Applied Psychological Measurement, 2008
Criteria have been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated the potential damage caused by organized item theft in computerized adaptive…
Descriptors: Test Items, Simulation, Item Analysis, Safety
Peer reviewed
Vock, Miriam; Holling, Heinz – Intelligence, 2008
The objective of this study is to explore the potential for developing IRT-based working memory scales for assessing specific working memory components in children (8-13 years). These working memory scales should measure cognitive abilities reliably in the upper range of the ability distribution as well as in the normal range, and provide a…
Descriptors: Test Items, Academic Achievement, Factor Structure, Factor Analysis
Peer reviewed
Lee, Kathryn S.; Osborne, Randall E.; Hayes, Keith A.; Simoes, Richard A. – Journal of Educational Computing Research, 2008
Minimal research has been conducted contrasting the effectiveness of various testing accommodations for college students diagnosed with ADHD. The current assumption is that these students are best served by extending the time they have to take a test. It is the supposition of these investigators that paced item presentation may be a more…
Descriptors: College Students, Testing Accommodations, Student Attitudes, Computer Assisted Testing
Peer reviewed
Vigneau, Francois; Bors, Douglas A. – Intelligence, 2008
Various taxonomies of Raven's Advanced Progressive Matrices (APM) items have been proposed in the literature to account for performance on the test. In the present article, three such taxonomies based on information processing, namely Carpenter, Just and Shell's [Carpenter, P.A., Just, M.A., & Shell, P. (1990). What one intelligence test…
Descriptors: Intelligence, Intelligence Tests, Factor Analysis, Classification
Peer reviewed
El-Alfy, El-Sayed M.; Abdel-Aal, Radwan E. – Computers & Education, 2008
Recent advances in educational technologies and the widespread use of computers in schools have fueled innovations in test construction and analysis. As the measurement accuracy of a test depends on the quality of the items it includes, item selection procedures play a central role in this process. Mathematical programming and the item response…
Descriptors: Test Items, Item Analysis, Educational Technology, Test Construction
Peer reviewed
Cohen, Jon; Chan, Tsze; Jiang, Tao; Seburn, Mary – Applied Psychological Measurement, 2008
U.S. state educational testing programs administer tests to track student progress and hold schools accountable for educational outcomes. Methods from item response theory, especially Rasch models, are usually used to equate different forms of a test. The most popular method for estimating Rasch models yields inconsistent estimates and relies on…
Descriptors: Testing Programs, Educational Testing, Item Response Theory, Computation
Peer reviewed
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory
Peer reviewed
Lamprianou, Iasonas – International Journal of Testing, 2008
This study investigates the effect of reporting the unadjusted raw scores in a high-stakes language exam when raters differ significantly in severity and self-selected questions differ significantly in difficulty. More sophisticated models, introducing meaningful facets and parameters, are successively used to investigate the characteristics of…
Descriptors: High Stakes Tests, Raw Scores, Item Response Theory, Language Tests
Kim, Seock-Ho; Cohen, Allan S.; DiStefano, Christine A.; Kim, Sooyeon – 1998
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) in the partial credit model were investigated using simulated data. The partial credit model with four ordered performance levels was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. Three different…
Descriptors: Item Bias, Simulation, Test Items
Raju, Nambury S.; Arenson, Ethan – 2002
An alternative method of finding a common metric for separate calibrations through the use of a common (anchor) set of items is presented. Based on Raju's (1988) method of calculating the area between the two item response functions, this (area-minimization) method minimizes the sum of the squared exact unsigned areas of each of the common items.…
Descriptors: Item Response Theory, Test Items
Deng, Hui; Ansley, Timothy N. – 2000
This study provided preliminary results about the performance of the DIMTEST statistical procedure for detecting multidimensionality with data simulated from both compensatory and noncompensatory models under a latent structure where all items in a test were influenced by the same two abilities. For the first case, data were simulated to reflect…
Descriptors: Simulation, Test Construction, Test Items
Peer reviewed
Christensen, Karl Bang; Bjorner, Jakob Bue; Kreiner, Svend; Petersen, Jorgen Holm – Psychometrika, 2002
Considers tests of unidimensionality in polytomous Rasch models against a specified alternative, given by a partition of the items into subgroups that are believed to measure different dimensions of the latent construct. Uses data from an occupational health study to motivate and illustrate the methods. (SLD)
Descriptors: Item Response Theory, Test Items
Peer reviewed
Little, Todd D.; Cunningham, William A.; Shahar, Golan; Widaman, Keith F. – Structural Equation Modeling, 2002
Studied the evidence for the practice of using parcels of items as manifest variables in structural equation modeling procedures. Findings suggest that the unconsidered use of parcels is never warranted, but the considered use of parcels cannot be dismissed out of hand. Describes a number of parceling techniques and their strengths and weaknesses.…
Descriptors: Structural Equation Models, Test Items
Peer reviewed
Ip, Edward Hak-Sing – Psychometrika, 2000
Provides a general method that adjusts for the inflation of information associated with a test containing item clusters and a computational scheme for the evaluation of the factors of adjustment for clusters in the restrictive and general cases. Illustrates the approach with an analysis of National Assessment of Educational Progress data. (SLD)
Descriptors: Cluster Analysis, Correlation, Test Items