Publication Date
| Publication Date | Records |
|---|---|
| In 2026 | 0 |
| Since 2025 | 74 |
| Since 2022 (last 5 years) | 509 |
| Since 2017 (last 10 years) | 1084 |
| Since 2007 (last 20 years) | 2603 |
Audience
| Audience | Records |
|---|---|
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location
| Location | Records |
|---|---|
| Turkey | 173 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating
| Rating | Records |
|---|---|
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Peer reviewed: Callender, John C.; Osburn, H. G. – Educational and Psychological Measurement, 1977
A FORTRAN program for maximizing and cross-validating split-half reliability coefficients is described. Externally computed arrays of item means and covariances are used as input for each of two samples. The user may select a number of subsets from the complete set of items for analysis in a single run. (Author/JKS)
Descriptors: Computer Programs, Item Analysis, Test Reliability, Test Validity
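The computation behind a single split-half coefficient can be sketched directly from an item covariance matrix. The Python fragment below is only an illustration of that idea (the Flanagan/Rulon form for one fixed split), not the FORTRAN program described above; the function and variable names are invented for the example, and the program additionally searches over item subsets and cross-validates results in a second sample.

```python
import numpy as np

def split_half_reliability(cov, half_a, half_b):
    """Split-half reliability (Flanagan/Rulon form) for one fixed split,
    computed from a k x k item covariance matrix."""
    cov = np.asarray(cov, dtype=float)
    var_a = cov[np.ix_(half_a, half_a)].sum()   # variance of half-test A
    var_b = cov[np.ix_(half_b, half_b)].sum()   # variance of half-test B
    var_total = cov.sum()                       # variance of the full test
    return 2.0 * (1.0 - (var_a + var_b) / var_total)

# Illustrative data: four items loading on a common factor.
rng = np.random.default_rng(0)
theta = rng.normal(size=(500, 1))
items = theta + rng.normal(size=(500, 4))
cov = np.cov(items, rowvar=False)
print(split_half_reliability(cov, half_a=[0, 2], half_b=[1, 3]))
```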
Peer reviewed: Cooper, Merri-Ann; Fiske, Donald W. – Educational and Psychological Measurement, 1976
Construct validity patterns of test-criteria and item-criteria correlations are shown to be inconsistent across samples. The results of an investigation of construct validity patterns on two published personality scales are presented. (JKS)
Descriptors: Correlation, Item Analysis, Personality Measures, Reliability
French, Christine L. – 2001
Item analysis is an important consideration in the test development process. It is a set of statistical procedures for analyzing test items that combines methods for evaluating key characteristics of items, such as difficulty, discrimination, and the effectiveness of distractors. This paper reviews some of the classical methods for…
Descriptors: Item Analysis, Item Response Theory, Selection, Test Items
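As a concrete reminder of what two of those classical indices look like, here is a minimal Python sketch; it is illustrative only, and the function name and the choice of corrected item-total correlation as the discrimination measure are mine, not the paper's.

```python
import numpy as np

def classical_item_analysis(responses):
    """Classical item statistics for a persons x items matrix of 0/1 scores:
    difficulty = proportion correct, discrimination = corrected item-total r."""
    X = np.asarray(responses, dtype=float)
    difficulty = X.mean(axis=0)                      # item p-values
    total = X.sum(axis=1)
    discrimination = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        rest = total - X[:, j]                       # score on the remaining items
        discrimination[j] = np.corrcoef(X[:, j], rest)[0, 1]
    return difficulty, discrimination
```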
Peer reviewed: Clopton, James R. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Comparative Analysis, Computer Programs, Hypothesis Testing, Item Analysis
Peer reviewed: Bohrnstedt, George W.; Campbell, Richard T. – Educational and Psychological Measurement, 1972
Descriptors: Computer Programs, Data Analysis, Item Analysis, Rating Scales
Peer reviewed: Whitney, Douglas R.; Sabers, Darrell L. – Journal of Experimental Education, 1971
Descriptors: Discriminant Analysis, Essay Tests, Item Analysis, Statistical Analysis
Gunn, Robert L.; Pearman, H. Egar – J Clin Psychol, 1970
A schedule was developed for assessing the future outlook of hospitalized psychiatric patients and administered to samples of patients from two different hospitals. A factor analysis was done for each sample. (CK)
Descriptors: Attitudes, Factor Analysis, Item Analysis, Patients
Simon, George B. – J Educ Meas, 1969
Descriptors: Item Analysis, Measurement Instruments, Test Construction, Test Results
Hunt, Richard A. – Educ Psychol Meas, 1970
Descriptors: Computer Programs, Item Analysis, Psychological Evaluation, Rating Scales
Koppel, Mark A.; Sechrest, Lee – Educ Psychol Meas, 1970
Descriptors: Correlation, Experimental Groups, Humor, Intelligence
Peer reviewed: Frisbie, David A. – Educational and Psychological Measurement, 1981
The Relative Difficulty Ratio (RDR) was developed as an index of test or item difficulty for use when raw score means or item p-values are not directly comparable because of chance score differences. Computational procedures for the RDR are described. Applications of the RDR at both the test and item level are illustrated. (Author/BW)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Test Items
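Frisbie's exact formula is not reproduced in the abstract, so the snippet below shows only the general idea of adjusting a p-value for guessing before difficulties are compared; it is an assumed, generic correction for illustration, not necessarily the RDR itself.

```python
def chance_corrected_difficulty(p, n_options):
    """Rescale a proportion-correct p so that the expected chance score
    (1 / n_options under blind guessing) maps to 0 and a perfect score to 1.
    Illustrative only; this generic correction is not Frisbie's RDR."""
    chance = 1.0 / n_options
    return (p - chance) / (1.0 - chance)

# The same p-value of .70 on a 4-option item versus a true-false item.
print(chance_corrected_difficulty(0.70, 4), chance_corrected_difficulty(0.70, 2))
```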
Peer reviewed: Jackson, Paul H. – Psychometrika, 1979
Use of the same term "split-half" for division of an n-item test into two subtests containing equal (Cronbach), and possibly unequal (Guttman), numbers of items sometimes leads to a misunderstanding about the relation between Guttman's maximum split-half bound and Cronbach's coefficient alpha. This distinction is clarified. (Author/JKS)
Descriptors: Item Analysis, Mathematical Formulas, Technical Reports, Test Reliability
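For reference, the two quantities being distinguished are usually written as follows (standard textbook forms, stated here as a sketch rather than as reproduced from the article):

```latex
% Cronbach's coefficient alpha for a k-item test,
% with item variances \sigma_i^2 and total-score variance \sigma_X^2:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)

% Guttman's split-half coefficient for a division into parts A and B
% (which need not contain equal numbers of items); his bound is the
% maximum of \lambda_4 over all such splits:
\lambda_4 = 2\left(1 - \frac{\sigma_A^2 + \sigma_B^2}{\sigma_X^2}\right)
```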
Peer reviewed: Hills, John R. – Educational Measurement: Issues and Practice, 1989
Test bias detection methods based on item response theory (IRT) are reviewed. Five such methods are commonly used: (1) equality of item parameters; (2) area between item characteristic curves; (3) sums of squares; (4) pseudo-IRT; and (5) one-parameter-IRT. A table compares these and six newer or less tested methods. (SLD)
Descriptors: Item Analysis, Test Bias, Test Items, Testing Programs
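To make the second of those methods concrete, the sketch below numerically approximates the unsigned area between two 2PL item characteristic curves over a bounded ability range. It is a rough illustration of the idea, not any particular published index, and the parameter names are assumed.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def unsigned_area_between_iccs(a_ref, b_ref, a_foc, b_foc, lo=-4.0, hi=4.0, n=2001):
    """Approximate the unsigned area between reference- and focal-group ICCs
    for the same item, one way to quantify potential item bias."""
    theta = np.linspace(lo, hi, n)
    gap = np.abs(icc_2pl(theta, a_ref, b_ref) - icc_2pl(theta, a_foc, b_foc))
    return np.trapz(gap, theta)

# Example: identical discrimination, difficulty shifted by half a logit.
print(unsigned_area_between_iccs(1.2, 0.0, 1.2, 0.5))
```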
Peer reviewed: Burton, Richard F. – Assessment & Evaluation in Higher Education, 2001
Item-discrimination indices are numbers calculated from test data that are used in assessing the effectiveness of individual test questions. This article asserts that the indices are so unreliable as to suggest that countless good questions may have been discarded over the years. It considers how the indices, and hence overall test reliability,…
Descriptors: Guessing (Tests), Item Analysis, Test Reliability, Testing Problems
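One common index of the kind the article examines is the upper-lower discrimination index D. The sketch below computes it from 0/1 scored data; it illustrates a standard index rather than Burton's analysis, and the 27% grouping fraction is simply the conventional default.

```python
import numpy as np

def upper_lower_discrimination(responses, fraction=0.27):
    """Discrimination index D: proportion correct among the top-scoring
    examinees minus the proportion correct among the bottom-scoring ones."""
    X = np.asarray(responses, dtype=float)
    order = np.argsort(X.sum(axis=1))             # examinees ranked by total score
    n = max(1, int(round(fraction * X.shape[0])))
    low, high = order[:n], order[-n:]
    return X[high].mean(axis=0) - X[low].mean(axis=0)
```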
van der Linden, Wim J. – Journal of Educational Measurement, 2005
In test assembly, a fundamental difference exists between algorithms that select a test sequentially or simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's ability estimate, such as the test information function in computerized adaptive testing. But it leads to the non-trivial problem of how to realize…
Descriptors: Law Schools, Item Analysis, Admission (School), Adaptive Testing
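The sequential rule mentioned in the abstract, optimizing the information function at the examinee's current ability estimate, can be sketched as greedy maximum-information item selection under a 2PL model. The code below is an assumed illustration of that rule, not the article's algorithm, and all names are invented for the example.

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Greedy sequential rule: pick the unadministered item with maximum
    information at the current ability estimate."""
    info = item_information_2pl(theta_hat, np.asarray(a), np.asarray(b))
    info[list(administered)] = -np.inf            # exclude items already given
    return int(np.argmax(info))

# Example pool of five items; item 1 has already been administered.
a = [1.0, 1.4, 0.8, 1.2, 1.6]
b = [-1.0, -0.2, 0.0, 0.6, 1.2]
print(select_next_item(0.3, a, b, administered={1}))
```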
