Showing 7,486 to 7,500 of 9,547 results
Martinez, Michael E.; Katz, Irvin R. – 1992
Contrasts between constructed response items and stem-equivalent multiple-choice counterparts typically have involved averaging item characteristics, and this aggregation has masked differences in statistical properties at the item level. Moreover, even aggregated format differences have not been explained in terms of differential cognitive…
Descriptors: Architecture, Cognitive Processes, Construct Validity, Constructed Response
Henning, Grant – 1991
In order to evaluate the Test of English as a Foreign Language (TOEFL) vocabulary item format and to determine the effectiveness of alternative vocabulary test items, this study investigated the functioning of eight different multiple-choice formats that differed with regard to: (1) length and inference-generating quality of the stem; (2) the…
Descriptors: Adults, Context Effect, Difficulty Level, English (Second Language)
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Halkitis, Perry N.; And Others – 1996
The relationship between test item characteristics and testing time was studied for a computer-administered licensing examination. One objective of the study was to develop a model to predict testing time on the basis of known item characteristics. Response latencies (i.e., the amount of time taken by examinees to read, review, and answer items)…
Descriptors: Computer Assisted Testing, Difficulty Level, Estimation (Mathematics), Licensing Examinations (Professions)
Bennett, Randy Elliot; And Others – 1989
This study examined the relationship of a machine-scorable, constrained free-response computer science item that required the student to debug a faulty program to two other types of items: multiple-choice and free-response requiring production of a computer program. The free-response items were from the College Board's Advanced Placement Computer…
Descriptors: College Students, Computer Science, Computer Software, Debugging (Computers)
Yepes-Baraya, Mario – 1997
The study described in this paper is part of an effort to improve understanding of the science assessment of the National Assessment of Educational Progress (NAEP). It involved the coding of all the items in the 1996 NAEP science assessments, which included 45 blocks (15 each for grades 4, 8, and 12) and over 500 items. Each of the approximately…
Descriptors: Coding, Elementary School Students, Grade 4, Intermediate Grades
Cox, James – 1996
This manual explains how to construct a questionnaire. It is intended for the novice researcher who has little experience in questionnaire construction. The first seven chapters discuss the following seven stages in questionnaire development: (1) establishing the guiding questions; (2) operationalizing and clarifying the guiding questions; (3)…
Descriptors: Data Analysis, Educational Research, Models, Opinions
Abedi, Jamal; And Others – 1995
This study examines the linguistic features of the National Assessment of Educational Progress (NAEP) mathematics test items and investigates the significance of language-related variables for NAEP's assessment in the content area of mathematics. The continuing increase in the number of language minority students in classrooms nationwide has…
Descriptors: Educational Assessment, Elementary Secondary Education, Language Minorities, Language Role
Sugrue, Brenda; And Others – 1995
This study aims to evaluate the degree to which the achievement level descriptions adopted by the National Assessment Governing Board (NAGB) for the 1992 assessment in mathematics accurately represent what students at a given achievement level can do. NAGB descriptions of the levels were used to form lists of statements about what students at a…
Descriptors: Educational Assessment, Elementary Secondary Education, Knowledge Representation, Mathematics Achievement
Scheuneman, Janice; And Others – 1991
To help increase the understanding of sources of difficulty in test items, a study was undertaken to evaluate the effects of various aspects of prose complexity on the difficulty of achievement test items. The items of interest were those that presented a verbal stimulus followed by a question about the stimulus and a standard set of…
Descriptors: Achievement Tests, Difficulty Level, Goodness of Fit, Knowledge Level
Mills, Craig N.; Stocking, Martha L. – 1995
Computerized adaptive testing (CAT), while well-grounded in psychometric theory, has had few large-scale applications for high-stakes, secure tests in the past. This is now changing as the cost of computing has declined rapidly. As is always true where theory is translated into practice, many practical issues arise. This paper discusses a number…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Item Banks
Lawrence, Ida M.; And Others – 1995
This research summarizes differential item functioning (DIF) results for student produced response (SPR) items, a nonmultiple-choice mathematical item type in the Scholastic Aptitude Test I (SAT I). DIF data from 4 field trial pretest administrations (620 SPR items) and 10 final forms (100 SPR items with samples ranging from about 58,000 to over…
Descriptors: Black Students, Comparative Analysis, Item Bias, Mathematics Tests
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
In this study some alternative item selection criteria for adaptive testing are proposed. These criteria take into account the uncertainty of the ability estimates. A general weighted information criterion is suggested of which the usual maximum information criterion and the suggested alternative criteria are special cases. A simulation study was…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Meijer, Rob R.; And Others – 1994
Three methods for the estimation of the reliability of single dichotomous items are discussed. All methods are based on the assumptions of nondecreasing and nonintersecting item response functions and the Mokken model of double monotonicity. Based on analytical and Monte Carlo studies, it is concluded that one method is superior to the other two…
Descriptors: Estimation (Mathematics), Foreign Countries, Item Response Theory, Monte Carlo Methods
Badger, Elizabeth; Thomas, Brenda – 1992
In this digest a rationale is given for using open-ended questions in the assessment of student achievement, the use of open-ended questions in reading is discussed, and some implications for the classroom are outlined. Research has helped shift the focus from learning as content knowledge per se to learning as the ability to use and interpret…
Descriptors: Educational Assessment, Educational Research, Elementary Secondary Education, Knowledge Level