Showing 2,476 to 2,490 of 5,169 results
Peer reviewed
Callender, John C.; Osburn, H. G. – Educational and Psychological Measurement, 1977
A FORTRAN program for maximizing and cross-validating split-half reliability coefficients is described. Externally computed arrays of item means and covariances are used as input for each of two samples. The user may select a number of subsets from the complete set of items for analysis in a single run. (Author/JKS)
Descriptors: Computer Programs, Item Analysis, Test Reliability, Test Validity
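The quantity such a program optimizes can be computed directly from the item covariance matrix. A minimal sketch in Python (the function name and interface are illustrative; the search over candidate splits and the cross-validation step of the actual program are omitted):

    import numpy as np

    def split_half_reliability(cov, half_a, half_b):
        # Spearman-Brown split-half coefficient from an item covariance
        # matrix and an assignment of item indices to two halves.
        var_a = cov[np.ix_(half_a, half_a)].sum()    # variance of half-A total
        var_b = cov[np.ix_(half_b, half_b)].sum()    # variance of half-B total
        cov_ab = cov[np.ix_(half_a, half_b)].sum()   # covariance of the halves
        r_ab = cov_ab / np.sqrt(var_a * var_b)       # half-score correlation
        return 2 * r_ab / (1 + r_ab)                 # Spearman-Brown step-up

Maximizing this coefficient means searching over assignments of items to the two halves, which is what motivates a dedicated program.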
Peer reviewed
Cooper, Merri-Ann; Fiske, Donald W. – Educational and Psychological Measurement, 1976
Construct validity patterns of test-criteria and item-criteria correlations are shown to be inconsistent across samples. Results of an investigation of construct validity patterns for two published personality scales are presented. (JKS)
Descriptors: Correlation, Item Analysis, Personality Measures, Reliability
French, Christine L. – 2001
Item analysis is an important consideration in the test development process: a statistical procedure that combines methods for evaluating key characteristics of test items, such as difficulty, discrimination, and the effectiveness of distractors. This paper reviews some of the classical methods for…
Descriptors: Item Analysis, Item Response Theory, Selection, Test Items
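Two of the classical characteristics named above have standard formulas. A minimal sketch (assuming dichotomously scored 0/1 responses; the function name is illustrative):

    import numpy as np

    def item_statistics(responses):
        # responses: (n_examinees, n_items) array of 0/1 item scores.
        # Returns per-item difficulty (proportion correct) and the
        # point-biserial discrimination against the rest-of-test score.
        n, k = responses.shape
        p = responses.mean(axis=0)              # difficulty
        total = responses.sum(axis=1)
        r_pb = np.empty(k)
        for j in range(k):
            rest = total - responses[:, j]      # avoid item-total overlap
            r_pb[j] = np.corrcoef(responses[:, j], rest)[0, 1]
        return p, r_pb

Distractor analysis would extend this by tabulating option choices against total score, which requires the raw option responses rather than 0/1 scores.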
Peer reviewed
Clopton, James R. – Educational and Psychological Measurement, 1974
Descriptors: Comparative Analysis, Computer Programs, Hypothesis Testing, Item Analysis
Peer reviewed
Bohrnstedt, George W.; Campbell, Richard T. – Educational and Psychological Measurement, 1972
Descriptors: Computer Programs, Data Analysis, Item Analysis, Rating Scales
Peer reviewed
Whitney, Douglas R.; Sabers, Darrell L. – Journal of Experimental Education, 1971
Descriptors: Discriminant Analysis, Essay Tests, Item Analysis, Statistical Analysis
Gunn, Robert L.; Pearman, H. Egar – Journal of Clinical Psychology, 1970
A schedule was developed for assessing the future outlook of hospitalized psychiatric patients and administered to samples of patients from two different hospitals. A factor analysis was done for each sample. (CK)
Descriptors: Attitudes, Factor Analysis, Item Analysis, Patients
Simon, George B. – Journal of Educational Measurement, 1969
Descriptors: Item Analysis, Measurement Instruments, Test Construction, Test Results
Hunt, Richard A. – Educational and Psychological Measurement, 1970
Descriptors: Computer Programs, Item Analysis, Psychological Evaluation, Rating Scales
Koppel, Mark A.; Sechrest, Lee – Educational and Psychological Measurement, 1970
Descriptors: Correlation, Experimental Groups, Humor, Intelligence
Peer reviewed
Frisbie, David A. – Educational and Psychological Measurement, 1981
The Relative Difficulty Ratio (RDR) was developed as an index of test or item difficulty for use when raw score means or item p-values are not directly comparable because of chance score differences. Procedures for computing the RDR are described. Applications of the RDR at both the test and item level are illustrated. (Author/BW)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Test Items
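The abstract does not reproduce the formula. One standard way to make difficulties comparable when chance scores differ, which conveys the idea behind a relative-difficulty ratio (an illustrative reconstruction, not necessarily Frisbie's exact definition), is to correct each proportion correct for guessing before forming a ratio:

    p^{*} = \frac{p - 1/m}{1 - 1/m}, \qquad
    \mathrm{RDR} \approx \frac{p^{*}_{A}}{p^{*}_{B}}

where m is the number of response options, so that p^{*} is 0 at the chance level and 1 for a perfect score.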
Peer reviewed
Jackson, Paul H. – Psychometrika, 1979
Use of the same term "split-half" for division of an n-item test into two subtests containing equal (Cronbach), and possibly unequal (Guttman), numbers of items sometimes leads to a misunderstanding about the relation between Guttman's maximum split-half bound and Cronbach's coefficient alpha. This distinction is clarified. (Author/JKS)
Descriptors: Item Analysis, Mathematical Formulas, Technical Reports, Test Reliability
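The two quantities being contrasted have standard forms. For a k-item test with item variances \sigma_i^2, total-score variance \sigma_X^2, and half-score variances \sigma_A^2, \sigma_B^2 for a given split:

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right),
    \qquad
    \lambda_4 = 2\left(1 - \frac{\sigma_A^2 + \sigma_B^2}{\sigma_X^2}\right)

Guttman's bound is the maximum of \lambda_4 over all splits, including unequal ones, and is never smaller than \alpha; conflating the two split-half notions obscures that inequality.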
Peer reviewed
Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
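A numerical version of method (2), the area between item characteristic curves, is easy to sketch for the two-parameter logistic model (the grid bounds and truncated interval are choices of this illustration; closed-form area formulas also exist):

    import numpy as np

    def icc_2pl(theta, a, b):
        # Two-parameter logistic item characteristic curve.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def unsigned_area(a_ref, b_ref, a_foc, b_foc, lo=-4.0, hi=4.0, n=2001):
        # Unsigned area between the reference- and focal-group ICCs,
        # approximated by the trapezoidal rule on a uniform theta grid.
        theta = np.linspace(lo, hi, n)
        gap = np.abs(icc_2pl(theta, a_ref, b_ref) - icc_2pl(theta, a_foc, b_foc))
        dx = theta[1] - theta[0]
        return dx * (gap.sum() - 0.5 * (gap[0] + gap[-1]))

A large area indicates that the item functions differently for the two groups even when their ability distributions are matched.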
Peer reviewed
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2001
Item-discrimination indices are numbers calculated from test data that are used in assessing the effectiveness of individual test questions. This article asserts that the indices are so unreliable as to suggest that countless good questions may have been discarded over the years. It considers how the indices, and hence overall test reliability,…
Descriptors: Guessing (Tests), Item Analysis, Test Reliability, Testing Problems
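One of the indices at issue is the classical upper-lower discrimination index, which depends on a small subset of examinees and is therefore sensitive to sampling error. A minimal sketch (the 27% grouping fraction is a common convention, not a claim about Burton's analysis):

    import numpy as np

    def discrimination_index(item, total, frac=0.27):
        # Difference in proportion correct on one item between the
        # top- and bottom-scoring groups of examinees.
        n = len(total)
        g = max(1, int(round(frac * n)))
        order = np.argsort(total)
        low, high = order[:g], order[-g:]
        return item[high].mean() - item[low].mean()

With typical class sizes the comparison groups contain only a few dozen examinees, which is the source of the instability the article discusses.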
Peer reviewed
van der Linden, Wim J. – Journal of Educational Measurement, 2005
In test assembly, a fundamental difference exists between algorithms that select a test sequentially or simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's ability estimate, such as the test information function in computerized adaptive testing. But it leads to the non-trivial problem of how to realize…
Descriptors: Law Schools, Item Analysis, Admission (School), Adaptive Testing
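The sequential strategy mentioned in the abstract reduces, at each step, to picking the unused item with the most Fisher information at the current ability estimate. A minimal sketch for the two-parameter logistic model (function names are illustrative; the simultaneous alternative, which assembles the whole test at once under constraints, is not shown):

    import numpy as np

    def fisher_info_2pl(theta, a, b):
        # Fisher information of 2PL items (arrays a, b) at ability theta.
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    def next_item(theta_hat, a, b, administered):
        # Greedy sequential selection: the unused item with maximum
        # information at the current ability estimate.
        info = fisher_info_2pl(theta_hat, a, b)
        info[list(administered)] = -np.inf
        return int(np.argmax(info))

Here administered is the set of item indices already used; the non-trivial problem the article addresses is guaranteeing that such greedily assembled tests still satisfy global test specifications.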