Publication Date
| In 2026 | 0 |
| Since 2025 | 200 |
| Since 2022 (last 5 years) | 1070 |
| Since 2017 (last 10 years) | 2580 |
| Since 2007 (last 20 years) | 4941 |
Audience
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Peer reviewed: MacDonald, Paul; Paunonen, Sampo V. – Educational and Psychological Measurement, 2002
Examined the behavior of item and person statistics from item response theory and classical test theory frameworks through Monte Carlo methods with simulated test data. Findings suggest that item difficulty and person ability estimates are highly comparable for both approaches. (SLD)
Descriptors: Ability, Comparative Analysis, Difficulty Level, Item Response Theory
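The kind of comparison this abstract describes can be sketched with a small Monte Carlo simulation. Everything below (the Rasch model choice, sample size, item count) is an illustrative assumption, not the authors' actual design; it only shows why CTT proportion-correct difficulty tracks IRT difficulty so closely in simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 1000, 20

# True person abilities and item difficulties under a Rasch (1PL) model
theta = rng.normal(0, 1, n_persons)
b = rng.normal(0, 1, n_items)

# Simulate dichotomous responses: P(correct) = 1 / (1 + exp(-(theta - b)))
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(int)

# CTT statistics: item difficulty = proportion correct,
# person ability = total (number-correct) score
ctt_difficulty = responses.mean(axis=0)
ctt_ability = responses.sum(axis=1)

# Harder items (larger b) should yield lower proportions correct,
# so the two difficulty indices should correlate strongly negatively
r = np.corrcoef(ctt_difficulty, b)[0, 1]
print(f"correlation(p-value difficulty, true IRT b): {r:.2f}")
```

With a sample this size the rank ordering of items is nearly identical under both frameworks, which is the pattern the study reports.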
Peer reviewed: Hillocks, George, Jr. – English Journal, 2003
Suggests that analyses of current assessment practices need to examine the impact that testing has on teaching and the curriculum. Notes that writing assessment drives instruction. Provides basic questions to begin analyses of local and state assessments, and provides one such analysis of Illinois' assessment. Concludes that educators need to help…
Descriptors: Accountability, Critical Thinking, Secondary Education, State Standards
Peer reviewed: Thissen, David; And Others – Journal of Educational Measurement, 1989
An item response model for multiple-choice items is described and illustrated in item analysis. The model provides parametric and graphical summaries of the performance of each alternative associated with a multiple-choice item. The illustrative application of the model involves a pilot test of mathematics achievement items. (TJH)
Descriptors: Distractors (Tests), Latent Trait Theory, Mathematical Models, Mathematics Tests
Peer reviewed: Tippets, Elizabeth; Benson, Jeri – Applied Measurement in Education, 1989
The effect of 3 item arrangements (easy to hard, hard to easy, and random) on test anxiety was studied using an actual classroom examination administered to 126 graduate students (36 males and 90 females) under power conditions. Results indicate that anxiety level and test item arrangement are related. (TJH)
Descriptors: Achievement Tests, Difficulty Level, Graduate Students, Higher Education
Peer reviewed: Buchanan, Richard W.; Rogers, Martha – College Teaching, 1990
Some solutions are offered for three large-class testing problems: how to offer students an opportunity to be assessed in an essay format without straining the available grading resources; how to deal with students who miss a required examination; and how to generate large numbers of new, relevant examination questions regularly. (MSE)
Descriptors: Class Size, College Instruction, Essays, Higher Education
Peer reviewed: Ackerman, Terry A. – Applied Psychological Measurement, 1989
The characteristics of unidimensional ability estimates obtained from data generated using multidimensional compensatory models were compared with estimates from non-compensatory item response theory (IRT) models. The least squares matching procedures used represent a good method of matching the two multidimensional IRT models. (TJH)
Descriptors: Ability Identification, Computer Software, Difficulty Level, Estimation (Mathematics)
Peer reviewed: Reckase, Mark D. – Educational Measurement: Issues and Practice, 1989
Requirements for adaptive testing are reviewed, and the reasons implementation has taken so long are explored. The adaptive test is illustrated through the Stanford-Binet Intelligence Scale of L. M. Terman and M. A. Merrill (1960). Current adaptive testing is tied to the development of item response theory. (SLD)
Descriptors: Adaptive Testing, Educational Development, Elementary Secondary Education, Latent Trait Theory
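The tie between adaptive testing and item response theory noted in this abstract comes down to a simple loop: estimate ability, pick the unused item most informative at that estimate, observe the response, re-estimate. A minimal sketch under a Rasch model follows; the item bank, examinee ability, test length, and grid-search estimator are all illustrative assumptions, not any operational CAT design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bank of 50 Rasch items with known difficulties
bank_b = rng.normal(0, 1.2, 50)
true_theta = 0.8                      # simulated examinee ability (assumption)
grid = np.linspace(-4, 4, 161)        # grid for maximum-likelihood estimation

administered, answers = [], []
theta_hat = 0.0
for _ in range(15):
    # Fisher information of a Rasch item at theta_hat is p * (1 - p)
    p_hat = 1 / (1 + np.exp(-(theta_hat - bank_b)))
    info = p_hat * (1 - p_hat)
    info[administered] = -np.inf      # never re-administer an item
    item = int(np.argmax(info))

    # Simulate the examinee's response to the chosen item
    p_true = 1 / (1 + np.exp(-(true_theta - bank_b[item])))
    answers.append(int(rng.random() < p_true))
    administered.append(item)

    # Re-estimate ability by maximum likelihood over the grid
    bs = bank_b[administered]
    ps = 1 / (1 + np.exp(-(grid[:, None] - bs[None, :])))
    ll = np.sum(np.where(np.array(answers), np.log(ps), np.log(1 - ps)), axis=1)
    theta_hat = float(grid[np.argmax(ll)])

print(f"estimated ability after 15 adaptive items: {theta_hat:.2f}")
```

Because each item is chosen where it measures best, a short adaptive test like this can match the precision of a much longer fixed form.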
Peer reviewed: Rosenbaum, Paul R. – Psychometrika, 1988
Two theorems of unidimensional item response theory are extended to describe observable item response distributions when there is conditional independence between but not necessarily within item bundles. An item bundle is a small group of multiple-choice items sharing a common reading passage or a group of items sharing distractors. (SLD)
Descriptors: Equations (Mathematics), Item Analysis, Latent Trait Theory, Multiple Choice Tests
Peer reviewed: Kingsbury, G. Gage; Zara, Anthony R. – Applied Measurement in Education, 1991
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation
Peer reviewed: De Ayala, R. J. – Applied Psychological Measurement, 1994
Previous work on the effects of dimensionality on parameter estimation for dichotomous models is extended to the graded response model. Datasets are generated that differ in the number of latent factors as well as their interdimensional association, number of test items, and sample size. (SLD)
Descriptors: Estimation (Mathematics), Item Response Theory, Maximum Likelihood Statistics, Sample Size
Peer reviewed: Donoghue, John R. – Journal of Educational Measurement, 1994
Using the generalized partial-credit item response theory (IRT) model, polytomous items from the 1991 field test of the National Assessment of Educational Progress reading test were calibrated with multiple-choice and open-ended items. Polytomous items provide more information than dichotomous items. (SLD)
Descriptors: Equations (Mathematics), Field Tests, Item Response Theory, Multiple Choice Tests
Peer reviewed: Foos, Paul W.; And Others – Journal of Educational Psychology, 1994
In two experiments involving 260 college students, the generation effect, which occurs when individuals remember materials they have generated better than materials generated by others, was studied. Results support the generation effect and indicate that it occurs in a natural setting but only for test items targeted by generating students. (SLD)
Descriptors: Academic Achievement, College Students, Higher Education, Recall (Psychology)
Peer reviewed: Goldwater, Paul; Fogarty, Timothy – Journal of Education for Business, 1995
An expert system administered study questions from the Certified Public Accountant (CPA) and Certified Management Accountant (CMA) exams and others designed for textbooks to 113 accounting students. CPA/CMA questions were more difficult (71% correct compared to 74% for others); CMA questions were more challenging than CPA ones (67% to 73%…
Descriptors: Accounting, Certification, Difficulty Level, Expert Systems
Peer reviewed: Livingston, Samuel A.; Lewis, Charles – Journal of Educational Measurement, 1995
A method is presented for estimating the accuracy and consistency of classifications based on test scores. The reliability of the score is used to estimate effective test length in terms of discrete items. The true-score distribution is estimated by fitting a four-parameter beta model. (SLD)
Descriptors: Classification, Estimation (Mathematics), Scores, Statistical Distributions
Peer reviewed: Qualls, Audrey L. – Applied Measurement in Education, 1995
Classically parallel, tau-equivalently parallel, and congenerically parallel models representing various degrees of part-test parallelism and their appropriateness for tests composed of multiple item formats are discussed. An appropriate reliability estimate for a test with multiple item formats is presented and illustrated. (SLD)
Descriptors: Achievement Tests, Estimation (Mathematics), Measurement Techniques, Test Format
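One standard way to handle reliability for a test mixing item formats, in the spirit of this abstract, is stratified alpha: compute coefficient alpha within each format stratum and combine them, rather than forcing a single alpha across heterogeneous items. The sketch below uses that well-known estimator on toy data; the data-generating choices (200 examinees, 10 multiple-choice items, 4 polytomous constructed-response items, a single underlying factor) are illustrative assumptions, not the paper's example.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_persons, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def stratified_alpha(strata: list) -> float:
    """Stratified alpha: each item format is a stratum; unreliable
    variance is summed per stratum, then scaled by composite variance."""
    composite = np.hstack(strata).sum(axis=1)
    penalty = sum(s.sum(axis=1).var(ddof=1) * (1 - cronbach_alpha(s))
                  for s in strata)
    return 1 - penalty / composite.var(ddof=1)

# Toy data (assumption): one latent factor drives both formats
rng = np.random.default_rng(2)
theta = rng.normal(0, 1, 200)

# 10 dichotomous multiple-choice items (Rasch-style)
diff = rng.normal(0, 1, 10)
p_mc = 1 / (1 + np.exp(-(theta[:, None] - diff[None, :])))
mc = (rng.random((200, 10)) < p_mc).astype(float)

# 4 polytomous constructed-response items scored 0-4
cr = np.clip(np.round(2 + theta[:, None] + rng.normal(0, 1, (200, 4))), 0, 4)

print(f"stratified alpha: {stratified_alpha([mc, cr]):.2f}")
```

Treating the two formats as separate strata avoids the misfit that a single classically parallel model would impose on part-tests with different score scales.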


