Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Diones, Ruth; And Others – 1996
This study continued the research on analogy problem-solving on psychometric tests pursued by I. I. Bejar, R. Chaffin, and S. Embretson (1991). Characteristics of a semantic taxonomy and a cognitively and empirically motivated intensional/pragmatic (I/P) dichotomy were explored. There were two research questions: (1) Could the results of Bejar et…
Descriptors: Analogy, Cognitive Processes, College Entrance Examinations, Factor Analysis
Wainer, Howard; And Others – 1991
A series of computer simulations was run to measure the relationship between testlet validity and the factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Results confirmed the generality of earlier empirical findings of H. Wainer and others (1991) that making a testlet adaptive yields only marginal…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks
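To give a flavor of the kind of simulation described above, here is a minimal sketch, not the authors' design: under a Rasch model, a testlet assembled adaptively from the items nearest a provisional ability estimate is compared with a fixed, linearly assembled testlet in terms of Fisher information. The item pool, testlet length, and selection rules are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_info(theta, b):
    """Fisher information of dichotomous Rasch items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

# Hypothetical item pool: difficulties drawn from N(0, 1).
pool = rng.normal(0.0, 1.0, size=500)
testlet_len = 7

for theta in np.linspace(-2, 2, 9):
    # "Adaptive" testlet: items whose difficulty is closest to the provisional theta.
    adaptive = pool[np.argsort(np.abs(pool - theta))[:testlet_len]]
    # "Linear" testlet: a fixed spread of difficulties, the same for every examinee.
    linear = np.quantile(pool, np.linspace(0.1, 0.9, testlet_len))
    print(f"theta={theta:+.1f}  adaptive info={rasch_info(theta, adaptive).sum():.2f}  "
          f"linear info={rasch_info(theta, linear).sum():.2f}")
```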
Bezruczko, Nikolaus; And Others – 1989
The stability of bias estimates from J. Scheuneman's chi-square method, the transformed Delta method, Rasch's one-parameter residual analysis, and the Mantel-Haenszel procedure was compared across small and large samples for a data set of 30,000 cases. Bias values for 30 samples were estimated for each method, and means and variances of item…
Descriptors: Chi Square, Classification, Estimation (Mathematics), Identification
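Of the four methods compared in the entry above, the Mantel-Haenszel procedure has the most compact closed form. The sketch below (simulated data, not taken from the study) builds one 2x2 table per total-score stratum, computes the MH common odds ratio, and converts it to the ETS delta-scale index MH D-DIF = -2.35 ln(alpha_MH).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 0/1 responses to one studied item, a total test score used as the
# matching variable, and group membership (0 = reference, 1 = focal).
n = 2000
group = rng.integers(0, 2, n)
score = rng.integers(0, 31, n)
p_correct = 1 / (1 + np.exp(-(score - 15) / 5)) - 0.05 * group   # small built-in DIF
item = (rng.random(n) < p_correct).astype(int)

num, den = 0.0, 0.0
for k in np.unique(score):                   # one 2x2 table per score stratum
    m = score == k
    A = np.sum((group[m] == 0) & (item[m] == 1))   # reference, correct
    B = np.sum((group[m] == 0) & (item[m] == 0))   # reference, incorrect
    C = np.sum((group[m] == 1) & (item[m] == 1))   # focal, correct
    D = np.sum((group[m] == 1) & (item[m] == 0))   # focal, incorrect
    T = A + B + C + D
    if T > 0:
        num += A * D / T
        den += B * C / T

alpha_mh = num / den                         # Mantel-Haenszel common odds ratio
mh_d_dif = -2.35 * np.log(alpha_mh)          # ETS delta-scale DIF index
print(f"alpha_MH = {alpha_mh:.3f},  MH D-DIF = {mh_d_dif:.3f}")
```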
Nasser, Fadia; Takahashi, Tomone – 1996
The structure and the levels of test anxiety among Israeli-Arab high school students were examined using the Arabic version of I. G. Sarason's (1984) Reactions to Tests scale. The questionnaire was administered before a math examination to 226 female and 195 male students. The results of confirmatory factor analyses using eight item parcels…
Descriptors: Arabic, Factor Structure, Foreign Countries, High School Students
Alberta Dept. of Education, Edmonton. Language Services Branch. – 1995
The French as a Second Language model tests for beginning levels 1, 2, and 3 were designed to evaluate students' language performance, as outlined in the program of studies for Alberta, Canada, in listening and reading comprehension, oral and written production, communication skills, culture, language, and general language knowledge. The tests…
Descriptors: Difficulty Level, Elementary Education, Foreign Countries, French
Olson, John F.; And Others – 1989
Traditionally, item difficulty has been defined in terms of the performance of examinees. For test development purposes, a more useful concept would be some kind of intrinsic item difficulty, defined in terms of the item's content, context, or characteristics and the task demands set by the item. In this investigation, the measurement literature…
Descriptors: Classification, Cluster Analysis, Difficulty Level, Educational Research
Mislevy, Robert J.; Wu, Pao-Kuei – 1988
The basic equations of item response theory provide a foundation for inferring examinees' abilities and items' operating characteristics from observed responses. In practice, though, examinees will usually not have provided a response to every available item--for reasons that may or may not have been intended by the test administrator, and that…
Descriptors: Ability, Adaptive Testing, Equations (Mathematics), Estimation (Mathematics)
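As context for the inference problem the entry above describes, here is a minimal sketch of ability estimation that simply drops unpresented items from a Rasch likelihood, which is appropriate only when the missingness can be ignored, precisely the kind of question the abstract raises. The item difficulties, response pattern, and NaN coding are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(theta, resp, b):
    """Negative Rasch log-likelihood over observed responses only (NaN = not presented)."""
    seen = ~np.isnan(resp)
    p = 1.0 / (1.0 + np.exp(-(theta - b[seen])))
    x = resp[seen]
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Hypothetical example: 10 items with known difficulties, 4 of them never presented.
b = np.linspace(-2, 2, 10)
resp = np.array([1, 1, np.nan, 1, 0, np.nan, 0, np.nan, np.nan, 0], dtype=float)

fit = minimize_scalar(neg_loglik, bounds=(-4, 4), args=(resp, b), method="bounded")
print(f"ML ability estimate from the 6 observed responses: theta_hat = {fit.x:.2f}")
```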
Bennett, Randy Elliot; And Others – 1990
A framework for categorizing constructed-response items was developed in which items were ordered on a continuum from multiple-choice to presentation/performance according to the degree of constraint placed on the examinee's response. Two investigations were carried out to evaluate the validity of this framework. In the first investigation, 27…
Descriptors: Classification, Constructed Response, Models, Multiple Choice Tests
Making Use of Response Times in Standardized Tests: Are Accuracy and Speed Measuring the Same Thing?
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics
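As a toy illustration of the question in the title above, the sketch below simulates ability and speed as separate, modestly correlated traits and then correlates observed accuracy with mean log response time. Everything here (the trait correlation, the response-time model) is an assumption for illustration, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 500 examinees, 40 items; ability and speed are drawn as
# separate, modestly correlated traits.
n_persons, n_items = 500, 40
ability, speed = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], n_persons).T
b = rng.normal(0, 1, n_items)

p = 1 / (1 + np.exp(-(ability[:, None] - b[None, :])))
correct = rng.random((n_persons, n_items)) < p
log_rt = 3.0 - 0.5 * speed[:, None] + rng.normal(0, 0.4, (n_persons, n_items))

accuracy = correct.mean(axis=1)            # proportion correct per examinee
mean_log_rt = log_rt.mean(axis=1)          # average log response time per examinee
r = np.corrcoef(accuracy, mean_log_rt)[0, 1]
print(f"Correlation between accuracy and mean log response time: r = {r:.2f}")
```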
Kim, Seock-Ho; Cohen, Allan S. – 1997
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) were investigated using Monte Carlo simulations. The graded response model with five ordered categories was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. All DIF comparisons were simulated by…
Descriptors: Ability, Classification, Computer Simulation, Estimation (Mathematics)
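The likelihood ratio DIF test referenced above compares a compact calibration (item parameters constrained equal across groups) with an augmented one (the studied item's parameters free to differ between groups); the statistic G2 = -2(ln L_compact - ln L_augmented) is referred to a chi-square distribution with degrees of freedom equal to the number of freed parameters. The log-likelihood values and degrees of freedom below are hypothetical.

```python
from scipy.stats import chi2

# Hypothetical fitted log-likelihoods from two nested IRT calibrations.
loglik_compact = -10234.7    # item parameters equal across groups
loglik_augmented = -10231.2  # studied item's parameters free to differ
df = 4                       # e.g., threshold parameters freed for a 5-category graded item

G2 = -2.0 * (loglik_compact - loglik_augmented)
p_value = chi2.sf(G2, df)
print(f"G2 = {G2:.2f} on {df} df, p = {p_value:.3f}")
```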
Wise, Steven L.; And Others – 1997
The degree to which item review on a computerized adaptive test (CAT) could be used by examinees to inflate their scores artificially was studied. G. G. Kingsbury (1996) described a strategy in which examinees could use the changes in item difficulty during a CAT to determine which items' answers are incorrect and should be changed during item…
Descriptors: Achievement Gains, Adaptive Testing, College Students, Computer Assisted Testing
Matlock-Hetzel, Susan – 1997
When norm-referenced tests are developed for instructional purposes, to assess the effects of educational programs, or for educational research purposes, it can be very important to conduct item and test analyses. These analyses can evaluate the quality of items and of the test as a whole. Such analyses can also be employed to revise and improve…
Descriptors: Difficulty Level, Distractors (Tests), Elementary Secondary Education, Item Analysis
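A minimal sketch of the classical item statistics such an analysis usually reports: item difficulty as the proportion answering correctly, and item discrimination as the corrected point-biserial correlation between the item and the rest-score. The scored data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scored data: 200 examinees, 12 dichotomous items generated from a
# simple ability/difficulty model so that discriminations come out positive.
ability = rng.normal(0, 1, 200)
difficulty = rng.uniform(-1.5, 1.5, 12)
p = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
scores = (rng.random((200, 12)) < p).astype(float)

total = scores.sum(axis=1)
for j in range(scores.shape[1]):
    item = scores[:, j]
    p_value = item.mean()                             # difficulty: proportion correct
    rest = total - item                               # total score excluding the item itself
    discrimination = np.corrcoef(item, rest)[0, 1]    # corrected point-biserial
    print(f"item {j + 1:2d}: p = {p_value:.2f}, corrected r_pb = {discrimination:.2f}")
```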
Faggen, Jane; And Others – 1995
The objective of this study was to determine the degree to which recommendations for passing scores, calculated on the basis of a traditional standard-setting methodology, might be affected by the mode (paper versus computer-screen prints) in which test items were presented to standard setting panelists. Results were based on the judgments of 31…
Descriptors: Computer Assisted Testing, Cutting Scores, Difficulty Level, Evaluators
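Assuming an Angoff-style procedure (an assumption; the abstract says only that a traditional standard-setting methodology was used), each panelist judges, for every item, the probability that a minimally qualified examinee answers correctly; a panelist's implied passing score is the sum of those probabilities, and the panel recommendation is commonly the mean across panelists. The ratings below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical Angoff-style ratings: 5 panelists x 10 items, each entry the judged
# probability that a minimally qualified examinee answers the item correctly.
ratings = np.clip(rng.normal(0.65, 0.12, size=(5, 10)), 0.05, 0.95)

per_panelist = ratings.sum(axis=1)     # each panelist's implied raw passing score
panel_cut = per_panelist.mean()        # panel recommendation: mean across panelists
print("per-panelist cut scores:", np.round(per_panelist, 1))
print(f"recommended passing score: {panel_cut:.1f} of 10 items")
```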
Crehan, Kevin D.; Haladyna, Thomas M. – 1994
More attention is currently being paid to the distractors of a multiple-choice test item (Thissen, Steinberg, and Fitzpatrick, 1989). A systematic relationship exists between the keyed response and distractors in multiple-choice items (Levine and Drasgow, 1983). New scoring methods have been introduced, computer programs developed, and research…
Descriptors: Comparative Analysis, Computer Assisted Testing, Distractors (Tests), Models
Mislevy, Robert J. – 1992
A closed form approximation is given for the variance of examinee proficiency estimates in the Rasch model for dichotomous items, under the condition that only estimates, rather than true values, of item difficulty parameters are available. The term that must be added to the usual response-sampling variance is inversely proportional to both the…
Descriptors: Academic Achievement, Achievement Tests, Equations (Mathematics), Estimation (Mathematics)
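For context on the entry above: when item difficulties are treated as known, the usual response-sampling variance of the Rasch maximum-likelihood ability estimate is approximately the reciprocal of the test information, so SE(theta_hat) is roughly 1 / sqrt(sum of P_i(1 - P_i)). The paper's additional term for using estimated rather than true difficulties is not reproduced here; the sketch below computes only the baseline standard error, with hypothetical item difficulties.

```python
import numpy as np

def rasch_se(theta, b):
    """Approximate SE of the ML ability estimate with item difficulties treated as known."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    information = np.sum(p * (1.0 - p))   # Rasch test information at theta
    return 1.0 / np.sqrt(information)

# Hypothetical 25-item test with difficulties spread from -2 to 2 logits.
b = np.linspace(-2.0, 2.0, 25)
for theta in (-1.0, 0.0, 1.0):
    print(f"theta = {theta:+.1f}:  SE(theta_hat) = {rasch_se(theta, b):.2f}")
```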


