Peer reviewed
Butter, Rene; De Boeck, Paul – Psychometrika, 1998
An item response theory model based on the Rasch model is proposed for composite tasks, that is, tasks decomposed into subtasks of different kinds. The model, illustrated with an application to spelling tasks, constrains the difficulties of the composite tasks to be linear combinations of the difficulties of the subtask items. (SLD)
Descriptors: Difficulty Level, Item Response Theory, Mathematical Models, Spelling
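A minimal numerical sketch of the constrained-difficulty idea: the composite difficulties are not free parameters but linear combinations of subtask difficulties. The weights, difficulties, and ability below are illustrative assumptions, not values from the article.

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical subtask difficulties (e.g., spelling subtasks) and weights.
# The model constrains each composite difficulty to a linear combination
# of subtask difficulties rather than estimating it freely.
subtask_b = np.array([-0.5, 0.3, 1.1])   # assumed subtask difficulties
weights   = np.array([[1, 1, 0],         # composite 1 built from subtasks 1+2
                      [0, 1, 1]])        # composite 2 built from subtasks 2+3
composite_b = weights @ subtask_b        # constrained composite difficulties

theta = 0.8                              # hypothetical person ability
print(rasch_p(theta, composite_b))       # success probabilities per composite
```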
Peer reviewed
Roussos, Louis A.; Stout, William F.; Marden, John I. – Journal of Educational Measurement, 1998
Introduces a new approach for partitioning test items into dimensionally distinct item clusters. The core of this approach is a new item-pair conditional-covariance-based proximity measure that can be used with hierarchical cluster analysis. The procedure can correctly classify, on average, over 90% of the items for correlations as high as 0.9.…
Descriptors: Cluster Analysis, Cluster Grouping, Correlation, Multidimensional Scaling
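A rough sketch of the approach, assuming simulated 0/1 responses: compute an item-pair proximity from covariances conditional on the rest score (total on the remaining items), then feed it to hierarchical clustering. The stratification and proximity transform are simplified relative to the article.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 8))    # hypothetical 0/1 item responses

n_items = X.shape[1]
prox = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(i + 1, n_items):
        # Condition on the rest score, then average the within-stratum
        # covariance of items i and j.
        rest = X.sum(axis=1) - X[:, i] - X[:, j]
        covs = []
        for s in np.unique(rest):
            grp = X[rest == s]
            if len(grp) > 1:
                covs.append(np.cov(grp[:, i], grp[:, j])[0, 1])
        prox[i, j] = prox[j, i] = np.mean(covs)

# Larger conditional covariance suggests the same dimension, so use a
# decreasing transform as the distance for hierarchical clustering.
dist = squareform(prox.max() - prox, checks=False)
clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(clusters)
```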
Peer reviewed
Wood, William C. – Journal of Education for Business, 1998
Describes the technique of linked multiple choice, a hybrid of open-ended and multiple-choice formats. Explains how it combines the testing power of free-response questions with the efficient grading of multiple choice. (SK)
Descriptors: Grading, Multiple Choice Tests, Student Evaluation, Test Items
Peer reviewed
Karabatsos, George – Journal of Outcome Measurement, 1998
A Rasch method is proposed to measure variables of nonadditive conjoint structures, where dichotomous response conditions are evaluated. In this framework, both the number of endorsed items and their latent positions are considered. The four steps of the method are explained and illustrated with simulated person responses. (SLD)
Descriptors: Item Response Theory, Probability, Research Methodology, Responses
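One necessary condition in conjoint frameworks of this kind, that the ordering of items by endorsement proportion is invariant across raw-score groups, can be checked directly. The sketch below uses hypothetical data and is not the article's four-step method.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 5))    # hypothetical dichotomous responses

# Check: within each interior raw-score group, item endorsement proportions
# should preserve the same item ordering (a necessary condition for an
# additive conjoint / Rasch representation).
raw = X.sum(axis=1)
orders = []
for s in range(1, X.shape[1]):           # interior score groups only
    grp = X[raw == s]
    if len(grp):
        orders.append(np.argsort(grp.mean(axis=0)))
consistent = all((o == orders[0]).all() for o in orders)
print("item ordering consistent across score groups:", consistent)
```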
Peer reviewed
Velozzo, Craig A.; Lai, Jin-Shei; Mallinson, Trudy; Hauselman, Ellyn – Journal of Outcome Measurement, 2001
Studied how Rasch analysis could be used to reduce the number of items in an instrument while maintaining credible psychometric properties. Applied the approach to the Visual Function-14 developed to measure the need for and outcomes of cataract surgery. Results show how Rasch analysis can be useful in designing modifications of instruments. (SLD)
Descriptors: Item Response Theory, Psychometrics, Test Construction, Test Items
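A sketch of one common Rasch-based reduction step, flagging items by outfit mean-square; the fit window and simulated data are illustrative assumptions, not the article's criteria.

```python
import numpy as np

def outfit_msq(X, theta, b):
    """Unweighted (outfit) mean-square fit per item under the Rasch model."""
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    z2 = (X - P) ** 2 / (P * (1 - P))      # squared standardized residuals
    return z2.mean(axis=0)

rng = np.random.default_rng(2)
theta = rng.normal(size=400)               # hypothetical person measures
b = np.linspace(-1.5, 1.5, 14)             # hypothetical item difficulties
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random(P.shape) < P).astype(int)  # simulated responses

msq = outfit_msq(X, theta, b)
keep = (msq > 0.7) & (msq < 1.3)           # a common rule-of-thumb window
print("items retained:", np.flatnonzero(keep))
```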
Peer reviewed
Tate, Richard – Journal of Educational Measurement, 2000
Studied the error associated with a proposed linking method for tests consisting of both constructed response and multiple choice items through a simulation study varying several factors. Results support the use of the proposed linking method. Also illustrated possible linking bias resulting from use of the traditional linking method and the use…
Descriptors: Constructed Response, Equated Scores, Multiple Choice Tests, Simulation
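The abstract does not name the linking method, so as a generic illustration, here is a mean/sigma transformation that places one form's difficulty estimates on another's IRT scale via common (anchor) items; the anchor values are hypothetical.

```python
import numpy as np

def mean_sigma_link(b_new, b_old):
    """Mean/sigma transformation: theta_old = A * theta_new + B."""
    A = np.std(b_old, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_old) - A * np.mean(b_new)
    return A, B

# Hypothetical anchor-item difficulties estimated on each form
b_new = np.array([-1.2, -0.3, 0.4, 1.0])
b_old = np.array([-1.0, -0.1, 0.6, 1.3])
A, B = mean_sigma_link(b_new, b_old)
print(f"theta_old = {A:.3f} * theta_new + {B:.3f}")
```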
Peer reviewed
Meijer, Rob R. – Applied Psychological Measurement, 1995
A statistic used by R. Meijer (1994) to determine person fit was presented as counting the number of errors from the deterministic Guttman model (L. Guttman, 1950); in fact, it was based on the number of errors as defined by J. Loevinger (1947, 1948). (SLD)
Descriptors: Difficulty Level, Models, Responses, Scaling
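For reference, a Guttman error is a pass on a harder item paired with a fail on an easier one; the count is straightforward to compute (the definitions attributed to Guttman and to Loevinger differ in how such errors are defined and normed). A minimal sketch with a hypothetical response vector:

```python
import numpy as np

def guttman_errors(resp, difficulties):
    """Count Guttman errors: a harder item answered correctly while an
    easier item is answered incorrectly."""
    r = np.asarray(resp)[np.argsort(difficulties)]  # order items easy -> hard
    errors = 0
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            if r[i] == 0 and r[j] == 1:  # missed easier item, passed harder
                errors += 1
    return errors

resp  = [1, 0, 1, 1, 0, 1]                # hypothetical responses
diffs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0] # hypothetical item difficulties
print(guttman_errors(resp, diffs))        # -> 4
```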
Peer reviewed
Raju, Nambury S.; And Others – Applied Psychological Measurement, 1995
Internal measures of differential functioning of items and tests (DFIT) based on item response theory (IRT) are proposed. The new differential test functioning index leads to noncompensatory DIF indices. Monte Carlo studies demonstrate that these indices are accurate in assessing DIF. (SLD)
Descriptors: Item Response Theory, Monte Carlo Methods, Test Bias, Test Items
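The noncompensatory index (NCDIF) is the mean squared difference between focal- and reference-group expected item scores, averaged over the focal ability distribution. A minimal Monte Carlo sketch with hypothetical 3PL parameters:

```python
import numpy as np

def p3pl(theta, a, b, c):
    """Three-parameter logistic item response function."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

# Average the squared difference in expected scores over a sample drawn
# from the focal group's ability distribution.
theta_focal = np.random.default_rng(3).normal(size=5000)
p_ref   = p3pl(theta_focal, a=1.2, b=0.0,  c=0.2)   # reference parameters
p_focal = p3pl(theta_focal, a=1.2, b=0.35, c=0.2)   # focal parameters (assumed)
ncdif = np.mean((p_focal - p_ref) ** 2)
print(f"NCDIF = {ncdif:.4f}")
```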
Peer reviewed
Wainer, Howard; Wang, Xiaohui – Journal of Educational Measurement, 2000
Modified the three-parameter model to include an additional random effect for items nested within the same testlet. Fitted the new model to 86 testlets from the Test of English as a Foreign Language (TOEFL) and compared standard parameters (discrimination, difficulty, and guessing) with those obtained through traditional modeling. Discusses the…
Descriptors: English (Second Language), Language Tests, Scoring, Statistical Analysis
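A sketch of the modified response function: the standard 3PL with an extra random effect, shared by all items nested in the same testlet, which induces the local dependence the model is designed to absorb. Parameter values here are hypothetical.

```python
import numpy as np

def p_testlet_3pl(theta, a, b, c, gamma):
    """3PL probability with a testlet effect: gamma is a person-specific
    random effect shared by all items in the same testlet."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b - gamma)))

rng = np.random.default_rng(4)
theta  = 0.5                        # hypothetical ability
gammas = rng.normal(0, 0.8, 2)      # one random effect per testlet
for t, g in enumerate(gammas):
    # items within the same testlet share gamma
    print(f"testlet {t}:", p_testlet_3pl(theta, a=1.0, b=0.0, c=0.2, gamma=g))
```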
Peer reviewed
Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 2000
Showed how Taylor approximation can be used to generate a linear approximation to a logistic item characteristic curve and a linear ability estimator. Demonstrated how, for a specific simulation, this could result in the special case of a Robbins-Monro item selection procedure for adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Selection
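The core device is a first-order Taylor expansion of the logistic item characteristic curve around a provisional ability estimate, which makes the curve, and hence the ability estimator, locally linear. A minimal sketch with assumed 2PL parameters:

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def icc_linearized(theta, a, b, theta0):
    """First-order Taylor expansion of the 2PL ICC around theta0."""
    p0 = logistic(a * (theta0 - b))
    slope = a * p0 * (1 - p0)          # derivative of the ICC at theta0
    return p0 + slope * (theta - theta0)

theta0 = 0.0                           # provisional ability estimate
for theta in (-0.5, 0.0, 0.5):
    exact  = logistic(1.3 * (theta - 0.2))
    approx = icc_linearized(theta, a=1.3, b=0.2, theta0=theta0)
    print(f"theta={theta:+.1f}  exact={exact:.4f}  linear={approx:.4f}")
```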
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 2000
Characterizes the circumstances under which validity changes may occur as a result of the deletion of a predictor test segment. Equations show that, for a positive outcome, one should seek a relatively large correlation between the scores from the deleted segment and the remaining items, with a relatively low correlation between scores from the…
Descriptors: Equations (Mathematics), Prediction, Reliability, Scores
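The comparison behind those equations can be reproduced directly: with X = R + D (remaining items plus deleted segment), the full-test validity follows from the component correlations and standard deviations. The values below are hypothetical but match the abstract's favorable scenario: the deleted segment correlates highly with the remaining items and weakly with the criterion.

```python
import numpy as np

def validity_with_segment(r_ry, r_dy, r_rd, sd_r, sd_d):
    """Correlation of X = R + D with criterion Y, to compare with r_ry,
    the validity after deleting segment D."""
    sd_x = np.sqrt(sd_r**2 + sd_d**2 + 2 * sd_r * sd_d * r_rd)
    return (sd_r * r_ry + sd_d * r_dy) / sd_x

r_ry = 0.50   # validity of the remaining items alone
r_xy = validity_with_segment(r_ry=r_ry, r_dy=0.15, r_rd=0.70,
                             sd_r=10.0, sd_d=4.0)
print(f"validity with segment: {r_xy:.3f}  after deletion: {r_ry:.3f}")
```

Here deleting the segment raises validity (0.427 to 0.500), consistent with the conditions the abstract describes.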
Peer reviewed
Oakland, Thomas; Poortinga, Ype H.; Schlegel, Justin; Hambleton, Ronald K. – International Journal of Testing, 2001
Traces the history of the International Test Commission (ITC), reviewing the context in which it was formed, its goals, and major milestones in its development. Suggests ways the ITC may continue to impact test development positively, and introduces this inaugural journal issue. (SLD)
Descriptors: Educational History, Educational Testing, International Education, Test Construction
Peer reviewed
Campbell, David – Journal of Career Assessment, 2002
This overview of the development of the Campbell Interest and Skill Survey includes a series of questions answered in the construction process related to domains assessed, item content, response format, scale construction, length, item bias, scoring scales, and interpretation. The addition of skill items to the interest items is described. (SK)
Descriptors: Job Skills, Measures (Individuals), Test Construction, Test Items
Peer reviewed
Bernaards, Coen A.; Sijtsma, Klaas – Multivariate Behavioral Research, 1999
Used simulation to study the problem of missing item responses in tests and questionnaires when factor analysis is used to study the structure of the items. Factor loadings based on the EM algorithm best approximated the loading structure, with imputation of each person's mean across that person's observed scores the best alternative. (SLD)
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Simulation
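A sketch of the runner-up strategy, person-mean imputation, followed by factor analysis. The data, missingness rate, and use of scikit-learn's FactorAnalysis (rather than the article's EM-based loadings) are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10))                 # hypothetical item scores
mask = rng.random(X.shape) < 0.1               # 10% missing at random
X_missing = np.where(mask, np.nan, X)

# Person-mean imputation: replace each person's missing responses with
# that person's mean over the items the person did answer.
person_means = np.nanmean(X_missing, axis=1, keepdims=True)
X_imputed = np.where(np.isnan(X_missing), person_means, X_missing)

loadings = FactorAnalysis(n_components=2).fit(X_imputed).components_
print(loadings.shape)                          # (2 factors, 10 items)
```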
Peer reviewed
Maydeu-Olivares, Albert; Morera, Osvaldo; D'Zurilla, Thomas J. – Multivariate Behavioral Research, 1999
Using item response theory, discusses the difficulties faced in evaluating measurement invariance when a psychological construct is assessed through a test or inventory composed of categorical items. Illustrates the usefulness of fitplots in assessing measurement invariance in inventory data. (SLD)
Descriptors: Classification, Item Response Theory, Psychological Testing, Test Interpretation
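A fitplot compares observed and model-implied endorsement proportions across ability strata; computing the same curves separately per group is one way to inspect measurement invariance. The Rasch model and simulated data below are illustrative assumptions, not the article's inventory data.

```python
import numpy as np

def fitplot_data(X, theta, b, n_bins=5):
    """Observed vs. Rasch-implied endorsement proportions across ability
    strata -- the raw material of a fitplot for one item."""
    bins = np.quantile(theta, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(theta, bins[1:-1]), 0, n_bins - 1)
    obs, exp = [], []
    for k in range(n_bins):
        sel = idx == k
        obs.append(X[sel].mean())
        exp.append((1 / (1 + np.exp(-(theta[sel] - b)))).mean())
    return np.array(obs), np.array(exp)

rng = np.random.default_rng(6)
theta = rng.normal(size=1000)                       # hypothetical abilities
b = 0.3                                             # hypothetical difficulty
X = (rng.random(1000) < 1 / (1 + np.exp(-(theta - b)))).astype(int)
obs, exp = fitplot_data(X, theta, b)
print(np.round(obs, 3), np.round(exp, 3))
```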