Showing 6,676 to 6,690 of 9,530 results
Bennett, Randy Elliot – 1990
A new assessment conception is described that integrates constructed-response testing, artificial intelligence, and model-based measurement. The conception incorporates complex constructed-response items for their potential to increase the validity, instructional utility, and credibility of standardized tests. Artificial intelligence methods are…
Descriptors: Artificial Intelligence, Constructed Response, Educational Assessment, Measurement Techniques
Ban, Jae-Chun; Hanson, Bradley A.; Wang, Tianyou; Yi, Qing; Harris, Deborah J. – 2000
The purpose of this study was to compare and evaluate five online pretest item calibration/scaling methods in computerized adaptive testing (CAT): (1) the marginal maximum likelihood estimate with one-EM cycle (OEM); (2) the marginal maximum likelihood estimate with multiple EM cycles (MEM); (3) Stocking's Method A (M. Stocking, 1988); (4)…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Colorado State Dept. of Education, Denver. – 2000
This booklet contains released test items from the spring 2000 administration of the eighth grade mathematics and science tests of the Colorado Student Assessment Program. Items are released with the correct answers, and the scoring guide is included for selections from the constructed response portion of the science test. (SLD)
Descriptors: Achievement Tests, Grade 8, Junior High Schools, Mathematics Tests
Alberta Dept. of Education, Edmonton. Student Evaluation Branch. – 1997
This booklet presents the Written Response part of the English 30 Grade 12 Diploma examination in Alberta, Canada. After instructions for students, the booklet presents the first part of the examination in which students respond to Thom Gunn's poem "Tamer and Hawk," which addresses the nature and effect of a ruling passion in an…
Descriptors: Achievement Tests, English Instruction, Grade 12, High Schools
Sireci, Stephen G.; Gonzalez, Eugenio J. – 2003
International comparative educational studies make use of test instruments originally developed in English by international panels of experts, but that are ultimately administered in the language of instruction of the students. The comparability of the different language versions of these assessments is a critical issue in validating the…
Descriptors: Academic Achievement, Comparative Analysis, Difficulty Level, International Education
Guerrero, Lourdes; Rivera, Antonio – 2001
In this paper we report some results of an analysis of released items from the Third International Mathematics and Science Study (TIMSS). This analysis is part of an ongoing research project on Mexican high school curricula from an international perspective and on students' mathematics performance at this school level. The results…
Descriptors: Curriculum Development, Educational Change, Evaluation, Foreign Countries
Um, Eunkyoung; Dogan, Enis; Im, Seongah; Tatsuoka, Kikumi; Corter, James E. – 2003
Diagnostic analyses were conducted on data from the Third International Mathematics and Science Study second population (TIMSS-R; 1999) from the United States, Korea, and the Czech Republic in terms of test item attributes (i.e., content, processing skills, and item format) and inferred students' knowledge. The Rule Space model (K. Tatsuoka, 1998)…
Descriptors: Achievement Tests, Cross Cultural Studies, Diagnostic Tests, Foreign Countries
Finney, Sara J.; Smith, Russell W.; Wise, Steven L. – 1999
Two operational item pools were used to investigate the performance of stratum computerized adaptive tests (CATs) when items were assigned to strata based on empirical estimates of item difficulty or human judgments of item difficulty. Items from the first data set consisted of 54 5-option multiple choice items from a form of the ACT mathematics…
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, High School Students
Mullens, John E.; Gayler, Keith; Goldstein, David; Hildreth, Jeanine; Rubenstein, Michael; Spiggle, Tom; Walking Eagle, Karen; Welsh, Megan – 1999
This report describes the results from an exploratory project conducted for the National Center for Education Statistics. The purpose of the project was to develop and field test questionnaire items and related methods designed to capture information about the instructional processes used nationally in 8th- to 12th-grade mathematics classrooms.…
Descriptors: Case Studies, Elementary Secondary Education, Field Tests, Instruction
Jensema, Carl J.; Burch, Robb – 1999
This final report discusses the outcomes of the third in a series of studies related to the speed with which captions appear on television programs. Video segments captioned at different speeds were shown to 1,102 subjects (aged 11-95) with and without hearing impairments, and the subjects then responded to test items based on the captions in the…
Descriptors: Adults, Age Differences, Captions, Children
Lazarte, Alejandro A. – 1999
Two experiments reproduced in a simulated computerized test-taking situation the effect of two of the main determinants in answering an item in a test: the difficulty of the item and the time available to answer it. A model is proposed for the time to respond or abandon an item and for the probability of abandoning it or answering it correctly. In…
Descriptors: Computer Assisted Testing, Difficulty Level, Higher Education, Probability
Idaho State Department of Education, 2004
This instructional support guide describes the national-level research that forms the foundation of the Idaho Reading Initiative. It includes information about the items chosen for the Idaho Reading Indicator, as well as practical suggestions to help bring best practices to the classroom. The following sections are included in the guide: (1)…
Descriptors: Program Evaluation, Phonological Awareness, Reading Tests, Test Items
Bridgeman, Brent; Trapani, Catherine; Curley, Edward – College Entrance Examination Board, 2003
The impact of allowing more time for each question on SAT® I: Reasoning Test scores was estimated by embedding sections with a reduced number of questions into the standard 30-minute equating section of two national test administrations. Thus, for example, questions were deleted from a verbal section that contained 35 questions to produce forms…
Descriptors: College Entrance Examinations, Test Items, Timed Tests, Verbal Tests
Peer reviewed
Ace, Merle C.; Dawis, Rene V. – Educational and Psychological Measurement, 1973
Because no previous study was found in which both blank position in the item stem and positional placement of the correct response were studied simultaneously, it was decided to investigate the influence of these two factors, alone and in combination, on the difficulty level of verbal analogy items. (Authors)
Descriptors: Analysis of Variance, Data Analysis, Difficulty Level, Disadvantaged
Peer reviewed
Frary, Robert B.; Hutchinson, T.P. – Educational and Psychological Measurement, 1982
Alternate versions of Hutchinson's theory were compared, and one which implies the existence of partial knowledge was found to be better than one which implies that an appropriate measure of ability is obtained by applying the conventional correction for guessing. (Author/PN)
Descriptors: Guessing (Tests), Latent Trait Theory, Multiple Choice Tests, Scoring Formulas