Showing 4,786 to 4,800 of 9,533 results
French, Christine L. – 2001
Item analysis is an important step in the test development process: a set of statistical procedures for evaluating key characteristics of test items, such as their difficulty, their discrimination, and the effectiveness of their distractors. This paper reviews some of the classical methods for…
Descriptors: Item Analysis, Item Response Theory, Selection, Test Items
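The classical item statistics this abstract refers to can be sketched in a few lines; the function below is this editor's illustration (not French's procedure), computing the standard p-value difficulty index and a point-biserial discrimination against the rest-of-test score, on invented 0/1 response data:

```python
# Hedged sketch of classical item analysis on dichotomous (0/1) responses.
# Difficulty = proportion correct; discrimination = point-biserial
# correlation between the item score and the total score on the
# remaining items. Names and data are illustrative assumptions.

def item_statistics(responses):
    """responses: list of examinee vectors, 1 = correct, 0 = incorrect."""
    n = len(responses)
    n_items = len(responses[0])
    stats = []
    for j in range(n_items):
        item = [r[j] for r in responses]
        rest = [sum(r) - r[j] for r in responses]   # total excluding item j
        p = sum(item) / n                           # difficulty (p-value)
        mean_rest = sum(rest) / n
        cov = sum((i - p) * (t - mean_rest) for i, t in zip(item, rest)) / n
        var_item = p * (1 - p)
        var_rest = sum((t - mean_rest) ** 2 for t in rest) / n
        disc = (cov / (var_item * var_rest) ** 0.5
                if var_item > 0 and var_rest > 0 else 0.0)
        stats.append((p, disc))                     # (difficulty, discrimination)
    return stats
```

An item answered correctly mainly by high scorers yields a positive discrimination; one answered at random hovers near zero.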
Peer reviewed
Frisbie, David A. – Educational and Psychological Measurement, 1981
The Relative Difficulty Ratio (RDR) was developed as an index of test or item difficulty for use when raw score means or item p-values are not directly comparable because of chance score differences. Computational procedures for the RDR are described. Applications of the RDR at both the test and item level are illustrated. (Author/BW)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Test Items
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
A formal framework is presented for determining which of the distractors of multiple-choice test items has a small probability of being chosen by a typical examinee. The framework is based on a procedure similar to an indifference zone formulation of a ranking and selection problem. (Author/BW)
Descriptors: Mathematical Models, Multiple Choice Tests, Probability, Test Items
Peer reviewed
Nishisato, Shizuhiko – Psychometrika, 1996
Issues related to dual scaling are explored, and it is illustrated that quantification theory still contains a number of missing links between its mathematics and the validity of applications. Normed versus projected weights, problems of multidimensional unfolding, and option standardization are among the questions requiring study. (SLD)
Descriptors: Equations (Mathematics), Research Methodology, Responses, Scaling
Peer reviewed
Chang, Hua-Hua; And Others – Journal of Educational Measurement, 1996
An extension to the SIBTEST procedure of R. Shealy and W. Stout (1993) to detect differential item functioning (DIF) is proposed to handle polytomous items. Results of two simulations suggest that the modified SIBTEST performs reasonably well and sometimes can provide better control of impact-induced Type I error inflation. (SLD)
Descriptors: Comparative Analysis, Identification, Item Bias, Simulation
Peer reviewed
Papanastasiou, Elena C. – Structural Equation Modeling, 2003
This volume, based on papers presented at a 1998 conference, collects thinking and research on item generation for test development. It includes materials on psychometric and cognitive theory, construct-oriented approaches to item generation, the item generation process, and some applications of item generative principles. (SLD)
Descriptors: Item Banks, Test Construction, Test Items, Test Theory
Peer reviewed
Turner, Ronna C.; Carlson, Laurie – International Journal of Testing, 2003
Item-objective congruence as developed by R. Rovinelli and R. Hambleton is used in test development for evaluating content validity at the item development stage. Provides a mathematical extension to the Rovinelli and Hambleton index that is applicable for the multidimensional case. (SLD)
Descriptors: Content Validity, Test Construction, Test Content, Test Items
Peer reviewed
Roberts, James S.; Laughlin, James E. – Applied Psychological Measurement, 1996
A parametric item response theory model for unfolding binary or graded responses is developed. The graded unfolding model (GUM) is a generalization of the hyperbolic cosine model for binary data of D. Andrich and G. Luo (1993). Applicability of the GUM to attitude testing is illustrated with real data. (SLD)
Descriptors: Attitude Measures, Item Response Theory, Responses, Test Items
Peer reviewed
DeMars, Christine E. – Journal of Educational Measurement, 2003
Generated data to simulate multidimensionality resulting from including two or four subtopics on a test. DIMTEST analysis results suggest that including multiple topics, when they are commonly taught together, can lead to conceptual multidimensionality and mathematical multidimensionality. (SLD)
Descriptors: Curriculum, Simulation, Test Construction, Test Format
Peer reviewed
Winglee, Marianne; Kalton, Graham; Rust, Keith; Kasprzyk, Daniel – Journal of Educational and Behavioral Statistics, 2001
Studied the handling of missing data in the U.S. component of the International Reading Literacy Study and compared these approaches with other methods of handling missing data. For most analyses of the Reading Literacy Study, results show the data set completed by imputation to be a convenient option. (SLD)
Descriptors: Literacy, Reading Achievement, Research Methodology, Responses
Peer reviewed
Rudner, Lawrence – Practical Assessment, Research & Evaluation, 1999
Discusses the advantages and disadvantages of using item banks while providing useful information to those who are considering implementing an item banking project in their school district. The primary advantage of item banking is in test development. Also describes start-up activities in implementing item banking. (SLD)
Descriptors: Item Banks, Program Implementation, Test Construction, Test Items
Peer reviewed
Garner, Mary; Engelhard, George, Jr. – Journal of Applied Measurement, 2002
Describes a technique for obtaining item parameters of the Rasch model, a technique in which the item parameters are extracted from the eigenvectors of a matrix derived from comparisons between pairs of items. Describes advantages of this technique, which can be applied to both dichotomous and polytomous data. (SLD)
Descriptors: Estimation (Mathematics), Item Response Theory, Matrices, Test Items
Peer reviewed
Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
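One of the IRT-based methods this review lists, the area between item characteristic curves, is easy to illustrate; the sketch below assumes a three-parameter logistic ICC and approximates the unsigned area numerically. The parameterization and all values are this editor's illustration, not drawn from Hills' review:

```python
import math

# Hedged sketch: "area between item characteristic curves" compares an
# item's ICC as estimated in two groups; a large area suggests the item
# functions differently across groups. A 3PL ICC is assumed here:
#   P(theta) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))
# with discrimination a, difficulty b, guessing c (illustrative only).

def icc(theta, a, b, c):
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def area_between(params_ref, params_focal, lo=-4.0, hi=4.0, steps=800):
    """Midpoint-rule approximation of the unsigned area between two ICCs."""
    width = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        theta = lo + (i + 0.5) * width
        total += abs(icc(theta, *params_ref) - icc(theta, *params_focal)) * width
    return total
```

Identical parameters give an area of zero; shifting only the difficulty parameter b yields an area of roughly (1 - c) times the shift over a wide ability range.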
Peer reviewed
Thorn, Deborah W.; Deitz, Jean C. – Occupational Therapy Journal of Research, 1990
A study to examine the content validity of the Test of Orientation for Rehabilitation Patients provided moderate to strong support for the retention of 46 of 49 test items and for grouping the items by domain of orientation. It exemplified a quantitative approach to the examination of content validity using content experts. (JOW)
Descriptors: Content Validity, Neurological Impairments, Patients, Rehabilitation
Peer reviewed
Narayanan, Pankaja; Swaminathan, H. – Applied Psychological Measurement, 1994
Type I error rates for the Mantel-Haenszel (MH) statistic were within nominal limits when the MH procedure was compared with the simultaneous item bias (SIB) method for detecting differential item functioning. Results with data simulated for 1,296 conditions found Type I error rates slightly higher than expected for SIB. (SLD)
Descriptors: Comparative Analysis, Identification, Item Bias, Simulation
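The Mantel-Haenszel statistic compared in this study stratifies examinees by total score and forms a 2x2 (group by correct/incorrect) table at each stratum; a common odds ratio near 1 indicates no DIF. The sketch below shows the common odds ratio and the ETS delta transform on invented table counts; it illustrates the general MH procedure, not the authors' simulation:

```python
import math

# Hedged sketch of the Mantel-Haenszel common odds ratio for DIF.
# Each stratum (matched total-score level) contributes counts
#   A = reference correct, B = reference incorrect,
#   C = focal correct,     D = focal incorrect.
# All counts below are invented for illustration.

def mh_odds_ratio(tables):
    """tables: list of (A, B, C, D) tuples, one per score stratum."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def mh_delta(alpha):
    """ETS delta metric; negative values favor the reference group."""
    return -2.35 * math.log(alpha)
```

When A*D equals B*C in every stratum the odds ratio is exactly 1 and the delta is 0, i.e., no evidence of differential item functioning.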