Publication Date
| Date range | Records |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience
| Audience | Records |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| Rating | Records |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does Not Meet Standards | 1 |
Frisbie, David A. – 1981
The relative difficulty ratio (RDR) is used as a method of representing test difficulty. The RDR is the ratio of a test mean to the ideal mean, the point midway between the perfect score and the mean chance score for the test. The RDR transformation is a linear scale conversion method but not a linear equating method in the classical sense. The…
Descriptors: Comparative Testing, Difficulty Level, Evaluation Methods, Raw Scores
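The relative difficulty ratio described in the Frisbie (1981) abstract above can be written out directly: the ideal mean is the midpoint between the perfect score and the mean chance score, and the RDR is the observed test mean divided by that ideal mean. A minimal sketch; the function name and the worked numbers are illustrative assumptions, not values from the paper:

```python
def relative_difficulty_ratio(test_mean, perfect_score, chance_mean):
    """Relative difficulty ratio (RDR) as described in the abstract.

    The ideal mean is the point midway between the perfect score and the
    mean chance score; the RDR is the ratio of the observed test mean to
    that ideal mean.
    """
    ideal_mean = (perfect_score + chance_mean) / 2.0
    return test_mean / ideal_mean

# Illustrative values only: a 50-item, four-option multiple-choice test
# (mean chance score = 50 * 0.25 = 12.5) with an observed mean of 35.
print(relative_difficulty_ratio(test_mean=35, perfect_score=50, chance_mean=12.5))
# -> 35 / 31.25 = 1.12, i.e. the test was somewhat easier than the "ideal" difficulty
```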
Gordon, Belita – 1981
For use by those preparing statewide competency testing programs and those developing basic skills examinations at the postsecondary level, this paper provides guidelines for writing items for tests of basic reading skills. Following a discussion of how to choose appropriate items, how to word items, and how to select appropriate reading passages,…
Descriptors: Basic Skills, Higher Education, Minimum Competency Testing, Reading Tests
Diamond, Esther E. – 1981
As test standards and research literature in general indicate, definitions of test bias and item bias vary considerably, as do the results of existing methods of identifying biased items. The situation is further complicated by issues of content, context, construct, and criterion. In achievement tests, for example, content validity may impose…
Descriptors: Achievement Tests, Aptitude Tests, Psychometrics, Test Bias
Townsend, Michael A. R.; Mahoney, Peggy – 1980
The roles of humor and anxiety in test performance were investigated. Measures of trait anxiety, state anxiety and achievement were obtained on a sample of undergraduate students; the A-Trait and A-State scales of the State-Trait Anxiety Inventory were used. Half of the students received additional humorous items in the achievement test. The…
Descriptors: Achievement Tests, Anxiety, Higher Education, Humor
Peer reviewed: Rogers, Paul W. – Educational and Psychological Measurement, 1978
Two procedures for the display of item analysis statistics are described. One procedure allows for investigation of difficulty; the second plots item difficulty against item discrimination. (Author/JKS)
Descriptors: Difficulty Level, Graphs, Guidelines, Item Analysis
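The second display Rogers describes, item difficulty plotted against item discrimination, can be sketched from a 0/1 response matrix. The abstract does not say which discrimination index is used; the point-biserial correlation with the total score below is an assumption, and the data are simulated for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 0/1 response matrix: rows = examinees, columns = items.
rng = np.random.default_rng(0)
responses = (rng.random((200, 30)) < rng.uniform(0.3, 0.9, 30)).astype(int)

difficulty = responses.mean(axis=0)   # classical p-value (proportion correct) per item
total = responses.sum(axis=1)
# Point-biserial correlation of each item with the total score, one common
# choice of discrimination index (not necessarily the one in the article).
discrimination = np.array([np.corrcoef(responses[:, j], total)[0, 1]
                           for j in range(responses.shape[1])])

plt.scatter(difficulty, discrimination)
plt.xlabel("Item difficulty (proportion correct)")
plt.ylabel("Item discrimination (point-biserial)")
plt.title("Item difficulty plotted against discrimination")
plt.show()
```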
Peer reviewed: Strenski, Ellen – College English, 1979
Sample sentences in grammar books and in tests too frequently deal with cynicism and despair. (DD)
Descriptors: Grammar, Higher Education, Models, Negative Attitudes
Los Arcos, J. M.; Vano, E. – Educational Technology, 1978
Describes a computer-managed instructional system used to formulate, print, and evaluate true-false questions for testing purposes. The design of the system and its application in medical and nuclear engineering courses in two Spanish institutions of higher learning are detailed. (RAO)
Descriptors: Computer Assisted Testing, Computer Managed Instruction, Diagrams, Higher Education
Peer reviewed: Hartke, Alan R. – Journal of Educational Measurement, 1978
Latent partition analysis is shown to be useful in determining the conceptual homogeneity of an item population. Such item populations are useful for mastery testing. Applications of latent partition analysis in assessing content validity are suggested. (Author/JKS)
Descriptors: Higher Education, Item Analysis, Item Sampling, Mastery Tests
Peer reviewed: Dinero, Thomas E.; Haertel, Edward – Applied Psychological Measurement, 1977
This research simulated responses of 75 subjects to 30 items under the Birnbaum model and attempted a fit to the data using the Rasch model. When the variance of the item discriminations ranged from .05 to .25, there was only a slight increase in lack of fit as the variance increased. (Author/CTM)
Descriptors: Goodness of Fit, Item Analysis, Latent Trait Theory, Mathematical Models
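The data-generation side of the Dinero and Haertel design can be sketched as a two-parameter logistic (Birnbaum) simulation with a chosen discrimination variance. The generating distributions and clipping below are assumptions for illustration, not the study's specification, and the Rasch fitting step (which would need an estimation routine) is omitted:

```python
import numpy as np

def simulate_2pl(n_persons=75, n_items=30, disc_variance=0.25, seed=0):
    """Simulate 0/1 responses under the Birnbaum (two-parameter logistic) model.

    Item discriminations are drawn around 1.0 with the given variance, echoing
    the .05-.25 range varied in the study (exact distributions are assumptions).
    """
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 1.0, n_persons)               # person abilities
    b = rng.normal(0.0, 1.0, n_items)                      # item difficulties
    a = rng.normal(1.0, np.sqrt(disc_variance), n_items)   # item discriminations
    a = np.clip(a, 0.2, None)                              # keep discriminations positive
    logits = a * (theta[:, None] - b)                      # 2PL logit per person-item pair
    p = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random((n_persons, n_items)) < p).astype(int)

data = simulate_2pl(disc_variance=0.05)
print(data.shape, data.mean())   # (75, 30) and the overall proportion correct
```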
Peer reviewed: Crompton, John – English in Education, 1977
Based on a survey of examinations in drama, discusses principles, practices, and issues raised by the findings. (AA)
Descriptors: Drama, Educational Objectives, English Instruction, Foreign Countries
Peer reviewed: Yen, Wendy M. – Psychometrika, 1987
Comparisons are made between BILOG version 2.2 and LOGIST 5.0 version 2.5 in estimating the item parameters, traits, item characteristic functions, and test characteristic functions for the three-parameter logistic model. Speed and accuracy are reported for a number of 10-, 20-, and 40-item tests. (Author/GDC)
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Item Analysis
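For reference, the three-parameter logistic item response function that both programs in the Yen comparison estimate has a standard closed form. A minimal sketch; the parameter values and the use of the conventional D = 1.7 scaling constant are illustrative assumptions:

```python
import math

def three_pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic item response function.

    a: discrimination, b: difficulty, c: lower asymptote (pseudo-guessing).
    D = 1.7 is the conventional scaling constant that makes the logistic curve
    approximate the normal ogive; use D = 1.0 for a pure logistic metric.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# Probability of a correct response for an examinee of average ability
# on a moderately difficult, moderately discriminating item with guessing.
print(three_pl(theta=0.0, a=1.0, b=0.5, c=0.2))   # about 0.44
```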
Peer reviewed: Rasmussen, Jeffrey Lee – Multivariate Behavioral Research, 1988
A Monte Carlo simulation was used to compare the Mahalanobis "D" Squared and the Comrey "Dk" methods of detecting outliers in data sets. Under the conditions investigated, the "D" Squared technique was preferable as an outlier removal statistic. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Data Analysis, Monte Carlo Methods
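The Mahalanobis D-squared half of the Rasmussen comparison amounts to computing each case's squared distance from the sample centroid in the metric of the sample covariance and flagging large values. A sketch under assumed choices (the chi-square cutoff and the simulated data are not from the study):

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_d2(X):
    """Squared Mahalanobis distance of each row of X from the sample centroid."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Hypothetical data: 100 bivariate-normal cases plus one gross outlier.
rng = np.random.default_rng(1)
X = np.vstack([rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], 100),
               [[6.0, -6.0]]])

d2 = mahalanobis_d2(X)
cutoff = chi2.ppf(0.999, df=X.shape[1])   # one common (assumed) flagging rule
print(np.where(d2 > cutoff)[0])           # the gross outlier at index 100 should be flagged
```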
Peer reviewed: Skaggs, Gary; Lissitz, Robert W. – Applied Psychological Measurement, 1988
Item response theory equating invariance was examined by simulating vertical equating of two sets of examinee ability data comparing Rasch, three-parameter, and equipercentile equating methods. All three were reasonably invariant, suggesting that multidimensionality is likely to be the cause of lack of invariance found in real data sets. (SLD)
Descriptors: Ability, Elementary Secondary Education, Equated Scores, Latent Trait Theory
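Of the three methods Skaggs and Lissitz compare, equipercentile equating is the simplest to sketch: a form-X score is mapped to the form-Y score with the same percentile rank. The bare-bones version below uses raw empirical distributions and linear interpolation, with simulated scores; operational equating would add smoothing and continuization, and none of the specifics come from the study:

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x_points):
    """Map form-X scores to the form-Y scale by matching percentile ranks."""
    scores_x = np.sort(scores_x)
    scores_y = np.sort(scores_y)
    # Empirical percentile rank of each requested X score.
    pr = np.searchsorted(scores_x, x_points, side="right") / len(scores_x)
    # Invert through Y's empirical distribution by linear interpolation.
    y_pr = np.arange(1, len(scores_y) + 1) / len(scores_y)
    return np.interp(pr, y_pr, scores_y)

rng = np.random.default_rng(2)
form_x = rng.binomial(40, 0.55, 1000)   # hypothetical scores on a harder form
form_y = rng.binomial(40, 0.65, 1000)   # hypothetical scores on an easier form
print(equipercentile_equate(form_x, form_y, x_points=[15, 22, 30]))
```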
Peer reviewed: Thissen, David; Steinberg, Lynne – Psychometrika, 1986
This article organizes models for categorical item response data into three distinct classes. "Difference models" are appropriate for ordered responses, "divide-by-total" models for either ordered or nominal responses, and "left-side added" models for multiple-choice responses with guessing. Details of the taxonomy…
Descriptors: Classification, Item Analysis, Latent Trait Theory, Mathematical Models
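The "divide-by-total" class in the Thissen and Steinberg taxonomy covers models in which a category's probability is its exponential term divided by the sum over all categories, as in Bock's nominal response model. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def divide_by_total_probs(theta, a, c):
    """Category probabilities for a "divide-by-total" model (nominal response form).

    P(k | theta) = exp(a_k * theta + c_k) / sum_j exp(a_j * theta + c_j),
    where a and c are per-category slope and intercept parameters.
    """
    z = np.array(a) * theta + np.array(c)
    z -= z.max()                  # subtract the max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

# Hypothetical 4-category item evaluated for an examinee with theta = 0.5.
print(divide_by_total_probs(0.5, a=[-1.0, -0.3, 0.3, 1.0], c=[0.0, 0.2, 0.2, 0.0]))
```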
Peer reviewed: Foxman, Derek; And Others – Mathematics in School, 1984
Presented are examples of problem-solving items from practical and written mathematics tests. These tests are part of an English survey designed to assess the mathematics achievement of students aged 11 and 15. (JN)
Descriptors: Elementary Secondary Education, Mathematics Achievement, Mathematics Education, Mathematics Instruction


