| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 200 |
| Since 2022 (last 5 years) | 1070 |
| Since 2017 (last 10 years) | 2580 |
| Since 2007 (last 20 years) | 4941 |
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
| Location | Records |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Peer reviewed: Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1996
An item selection procedure for computerized adaptive testing based on average global information is proposed. Results from simulation studies comparing the approach with the usual maximum-item-information selection indicate that the new method reduces bias and mean squared error under many circumstances.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
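The two selection criteria compared above lend themselves to a compact illustration. Below is a minimal Python sketch, not the paper's implementation: it assumes a 2PL item pool, and the pool values, ability estimate, and integration window delta are all invented. Maximum information picks the item with the largest Fisher information at the current ability estimate, while the global criterion averages Kullback-Leibler discrepancy over an interval around it.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL response probability (assumed model; the paper's exact setup may differ)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def kl_info(theta_hat, a, b, delta=1.0, n=101):
    """Average KL divergence between the item response distribution at
    theta_hat and at points in [theta_hat - delta, theta_hat + delta]."""
    p0 = p2pl(theta_hat, a, b)
    thetas = np.linspace(theta_hat - delta, theta_hat + delta, n)
    p = p2pl(thetas, a, b)
    kl = p0 * np.log(p0 / p) + (1 - p0) * np.log((1 - p0) / (1 - p))
    return kl.mean()

# Hypothetical 5-item pool: discriminations a, difficulties b.
a = np.array([0.8, 1.2, 1.5, 0.9, 2.0])
b = np.array([-1.0, 0.0, 0.5, 1.0, 0.2])
theta_hat = 0.3  # current ability estimate

max_info_pick = int(np.argmax(fisher_info(theta_hat, a, b)))
kl_pick = int(np.argmax([kl_info(theta_hat, ai, bi) for ai, bi in zip(a, b)]))
print(max_info_pick, kl_pick)
```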
Peer reviewed: Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program, comparing item response theory (IRT) information under the three-parameter logistic and graded response models. Results for more than 3,000 adult examinees on 2 tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
Barnette, J. Jackson – Research in the Schools, 2001
Studied the primacy effect (tendency to select items closer to the left side of the response scale) in Likert scales worded from "Strongly Disagree" to "Strongly Agree" and in the opposite direction. Findings for 386 high school and college students show no primacy effect, although negatively worded stems had an effect on Cronbach's alpha. (SLD)
Descriptors: College Students, High School Students, High Schools, Higher Education
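Because the reliability finding above is stated in terms of Cronbach's alpha, a short sketch of the standard coefficient-alpha computation may help; the Likert responses below are invented and are not Barnette's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
x = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 4, 5],
     [3, 4, 3, 3], [1, 2, 2, 1], [4, 4, 5, 4]]
print(round(cronbach_alpha(x), 3))
```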
Peer reviewed: Carter, Ronald; Long, Michael N. – ELT Journal, 1990
Explores the nature of literature examination questions in English-as-a-Foreign-Language (EFL) teaching. Three types of questioning, said to be more language based, are suggested as supplements to conventional tests: general comprehension, textual focus, and personal response. (GLR)
Descriptors: English (Second Language), Literature Appreciation, Questioning Techniques, Second Language Instruction
Peer reviewed: Adema, Jos J. – Journal of Educational Measurement, 1990
Mixed integer linear programming models for customizing two-stage tests are presented. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. The models can be modified for use in the construction of multistage tests. (Author/TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Linear Programing
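A mixed integer linear programming formulation of the kind described above can be sketched with an off-the-shelf solver. The following uses the PuLP library as an assumed tool (not Adema's software); the item pool, information values, response times, and constraint limits are all invented for illustration.

```python
import pulp

# Hypothetical item pool: information at a target ability, response time
# in minutes, and a content-area label for each item.
info = [0.42, 0.55, 0.31, 0.60, 0.48, 0.37]
time = [1.5, 2.0, 1.0, 2.5, 1.8, 1.2]
area = ["alg", "geo", "alg", "geo", "alg", "geo"]
n = len(info)

prob = pulp.LpProblem("two_stage_module", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]

prob += pulp.lpSum(info[i] * x[i] for i in range(n))            # maximize information
prob += pulp.lpSum(x) == 4                                      # module length
prob += pulp.lpSum(time[i] * x[i] for i in range(n)) <= 8.0     # administration time
prob += pulp.lpSum(x[i] for i in range(n) if area[i] == "alg") >= 2  # content mix
prob += x[0] + x[2] <= 1    # inter-item dependency: items 0 and 2 overlap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in range(n) if x[i].value() == 1])
```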
Peer reviewed: Long, Vena M.; And Others – Mathematics Teacher, 1989
Discussed are experiences in using the calculator to assess mathematical understanding on the Missouri Mastery and Achievement Tests (MMAT). Data from a calculator group and a no-calculator group at the eighth- and tenth-grade levels are reported. Several items showed differences between groups. (YP)
Descriptors: Achievement Tests, Calculators, Mathematics, Mathematics Achievement
Peer reviewed: Willson, Victor L. – Journal of Educational Measurement, 1989
Performance on items in intelligence and achievement tests can be represented in terms of child development and information processes. Research is reviewed on item performance that supports developmental and information processing effects, particularly in children. Some suggestions regarding item development are made. (Author/TJH)
Descriptors: Achievement Tests, Child Development, Cognitive Processes, Early Childhood Education
Peer reviewed: Chalifour, Clark L.; Powers, Donald E. – Journal of Educational Measurement, 1989
Content characteristics of 1,400 Graduate Record Examination (GRE) analytical reasoning items were coded and related to item difficulty and discrimination. The results provide content characteristics for consideration in extending specifications for analytical reasoning items and a better understanding of the construct validity of these items. (TJH)
Descriptors: College Entrance Examinations, Construct Validity, Content Analysis, Difficulty Level
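For context on the indices such content coding is typically related to, here is a minimal sketch of classical item difficulty (proportion correct) and discrimination (item rest-score point biserial). Whether Chalifour and Powers used these exact statistics is not stated in the abstract, and the response data below are invented.

```python
import numpy as np

def item_stats(responses):
    """Classical item statistics for a 0/1 (examinees x items) matrix:
    difficulty = proportion correct; discrimination = correlation between
    the item and the rest-of-test score (avoids item-total overlap)."""
    r = np.asarray(responses, dtype=float)
    difficulty = r.mean(axis=0)
    disc = []
    for j in range(r.shape[1]):
        rest = r.sum(axis=1) - r[:, j]
        disc.append(np.corrcoef(r[:, j], rest)[0, 1])
    return difficulty, np.array(disc)

# Hypothetical scored responses: 6 examinees x 4 items.
resp = [[1, 1, 0, 1], [0, 1, 0, 0], [1, 1, 1, 1],
        [1, 0, 0, 1], [0, 0, 0, 0], [1, 1, 1, 1]]
diff, disc = item_stats(resp)
print(diff.round(2), disc.round(2))
```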
Peer reviewed: Waks, S.; Barak, M. – Research in Science and Technological Education, 1988
Defines the Cognitive Difficulty Level (CDL) of an item as the number of schemes required for solution (NS) multiplied by the required learner resources, expressed as the Problem Solving Taxonomy (PST) level. Describes procedures for validating the CDL index in high-school-level electronics. (Author/YP)
Descriptors: Cognitive Ability, Content Analysis, Difficulty Level, Electronics
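Since the CDL index defined above is a simple product of two ratings, a worked example makes it concrete; both numbers here are invented, not taken from the study.

```python
# CDL = NS (number of schemes required for solution) x PST level.
# Hypothetical electronics item: 3 solution schemes at PST level 4.
ns, pst_level = 3, 4
cdl = ns * pst_level
print(cdl)  # 12
```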
Peer reviewed: Boldt, Robert F. – Language Testing, 1989
Attempts to identify latent variables affecting the item responses of the diverse language groups taking the Test of English As a Foreign Language indicated that latent group effects were small. Results support equating with item response theory and suggest the use of a restrictive assumption of proportionality of item response curves. (Author/CB)
Descriptors: English (Second Language), Item Response Theory, Language Proficiency, Language Tests
Peer reviewed: Ilai, Doron; Willerman, Lee – Intelligence, 1989
Items showing sex differences on the revised Wechsler Adult Intelligence Scale (WAIS-R) were studied. In a sample of 206 young adults (110 males and 96 females), 15 items demonstrated significant sex differences, but there was no relationship of item-specific gender content to sex differences in item performance. (SLD)
Descriptors: Comparative Testing, Females, Intelligence Tests, Item Analysis
Peer reviewed: Sciarone, A. G.; Schoorl, J. J. – Language Learning, 1989
Presents findings from an experiment that sought to determine the minimal number of blanks required to ensure parallelism in cloze tests differing only in the point at which deletion starts. Results showed the required minimum depended on the scoring method used, with exact-word tests requiring about 100 blanks and acceptable-word tests…
Descriptors: Cloze Procedure, Dutch, Indonesian, Reading Tests
Peer reviewed: Liou, Michelle – Applied Psychological Measurement, 1988
In applying I. I. Bejar's method for detecting the dimensionality of achievement tests, researchers should be cautious in interpreting the slope of the principal axis. Other information from the data is needed in conjunction with Bejar's method of addressing item dimensionality. (SLD)
Descriptors: Achievement Tests, Computer Simulation, Difficulty Level, Equated Scores
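As a rough illustration of the quantity Liou warns about: Bejar's approach, as usually described, plots item difficulty estimates from two calibrations against each other and examines the slope of the principal axis of the scatter, with a slope near 1 read as consistent with unidimensionality. The sketch below computes only that slope from invented paired difficulties; it is not Bejar's full procedure.

```python
import numpy as np

# Hypothetical paired item difficulty estimates from two calibrations.
b1 = np.array([-1.2, -0.5, 0.0, 0.4, 0.9, 1.5])
b2 = np.array([-1.0, -0.6, 0.1, 0.5, 1.1, 1.4])

# Principal axis: eigenvector of the 2x2 covariance matrix with the
# largest eigenvalue; its slope is v[1] / v[0].
cov = np.cov(b1, b2)
vals, vecs = np.linalg.eigh(cov)
v = vecs[:, np.argmax(vals)]
print(round(v[1] / v[0], 3))
```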
Peer reviewed: Baker, Frank B. – Applied Psychological Measurement, 1988
The form of the item log-likelihood surface was investigated under the two-parameter and three-parameter logistic models. Results confirm that the LOGIST program procedures used to locate the maximum of the likelihood functions are consistent with the form of the item log-likelihood surface. (SLD)
Descriptors: Estimation (Mathematics), Factor Analysis, Graphs, Latent Trait Theory
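To make the notion of an item log-likelihood surface concrete, the sketch below evaluates a 2PL item's log-likelihood over a grid of discrimination and difficulty values, treating abilities as known. The simulated data and grid are illustrative; this is not the LOGIST procedure itself, and the three-parameter case is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=500)                       # known abilities
a_true, b_true = 1.2, 0.3
p = 1 / (1 + np.exp(-a_true * (theta - b_true)))
u = (rng.random(500) < p).astype(float)            # simulated 0/1 responses

def loglik(a, b):
    """Item log-likelihood under the 2PL, abilities treated as known."""
    pr = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(u * np.log(pr) + (1 - u) * np.log(1 - pr))

a_grid = np.linspace(0.3, 2.5, 45)
b_grid = np.linspace(-1.5, 1.5, 45)
surface = np.array([[loglik(a, b) for b in b_grid] for a in a_grid])
i, j = np.unravel_index(surface.argmax(), surface.shape)
print(a_grid[i], b_grid[j])   # grid maximum should land near (a_true, b_true)
```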
Peer reviewed: Wilcox, Rand R.; And Others – Journal of Educational Measurement, 1988
The second-response conditional probability model of the decision-making strategies examinees use when answering multiple-choice test items was revised. Increasing the number of distractors, or providing distractors that gave examinees (N=106) the option to follow the model, improved results and gave a good fit to the data for 29 of 30 items. (SLD)
Descriptors: Cognitive Tests, Decision Making, Mathematical Models, Multiple Choice Tests


