Showing 5,191 to 5,205 of 9,547 results
Kim, Seock-Ho – 2002
Continuation ratio logits are used to model the probabilities of obtaining ordered categories in a polytomously scored item. This model is an alternative to other models for ordered category items, such as the graded response model and the generalized partial credit model. The discussion includes a theoretical development of the model, a…
Descriptors: Ability, Classification, Item Response Theory, Mathematical Models
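As an illustrative aside (not drawn from the cited paper), the continuation ratio mechanism can be sketched as follows: each step k is "passed" with a logistic probability in a*(theta - b[k]), and the category probability is the chance of passing the first k steps and stopping there. The parameterization shown is an assumption for illustration.

```python
import math

def continuation_ratio_probs(theta, a, b):
    """Category probabilities for an item scored 0..m under a continuation
    ratio formulation: given that a respondent reaches step k, the
    probability of passing it is logistic in a*(theta - b[k])."""
    probs = []
    reach = 1.0  # probability of reaching step k
    for bk in b:
        advance = 1.0 / (1.0 + math.exp(-a * (theta - bk)))
        probs.append(reach * (1.0 - advance))  # stop at category k
        reach *= advance
    probs.append(reach)  # top category: all steps passed
    return probs
```

By construction the probabilities sum to one, and higher theta shifts mass toward higher categories.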
Childs, Ruth A.; Jaciw, Andrew P. – 2003
This Digest describes matrix sampling of test items as an approach to achieving broad coverage while minimizing testing time per student. Matrix sampling involves developing a complete set of items judged to cover the curriculum, then dividing the items into subsets and administering one subset to each student. Matrix sampling, by limiting the…
Descriptors: Item Banks, Matrices, Sampling, Test Construction
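The subset-assignment step described in the abstract can be sketched in a few lines (a minimal illustration, not the authors' procedure): shuffle the full item pool once, then deal the items into disjoint short forms, one form per student group.

```python
import random

def matrix_sample(items, n_forms, seed=0):
    """Randomly divide a complete item pool into n_forms disjoint
    subsets (short forms), as in matrix sampling of test items."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    pool = list(items)
    rng.shuffle(pool)
    return [pool[i::n_forms] for i in range(n_forms)]
```

Every item appears in exactly one form, so the forms jointly cover the curriculum while each student answers only a fraction of the pool.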
Perkins, Kyle; Pohlmann, John T. – 2002
The purpose of this study was to determine if the patterning of the responses of English as a Second Language (ESL) students to a reading comprehension test would change over time due to the restructuring of the subjects' ESL reading comprehension competence as they increased their overall ESL proficiency. In this context, restructuring refers to…
Descriptors: Adults, Change, English (Second Language), Reading Comprehension
Hendrickson, Amy B.; Kolen, Michael J. – 2001
This study compared various equating models and procedures for a sample of data from the Medical College Admission Test (MCAT), considering how item response theory (IRT) equating results compare with classical equipercentile results and how the results based on use of various IRT models, observed score versus true score, direct versus linked…
Descriptors: Equated Scores, Higher Education, Item Response Theory, Models
Reese, Lynda M. – 1999
This study represented a first attempt to evaluate the impact of local item dependence (LID) for Item Response Theory (IRT) scoring in computerized adaptive testing (CAT). The most basic CAT design and a simplified design for simulating CAT item pools with varying degrees of LID were applied. A data generation method that allows the LID among…
Descriptors: College Entrance Examinations, Item Response Theory, Law Schools, Scoring
Bond, Lloyd – Carnegie Foundation for the Advancement of Teaching, 2004
The writer comments on the issue of high-stakes testing and the pressures on teachers to "teach to the test." Although many view teaching to the test as an all-or-none issue, in practice it is actually a continuum. At one end, some teachers examine the achievement objectives as described in their curriculum and then design instructional activities…
Descriptors: Testing, Standardized Tests, High Stakes Tests, Academic Achievement
Zwick, Rebecca – 1994
The Mantel-Haenszel (MH; Mantel and Haenszel, 1959) approach of Holland and Thayer (1988) is a well-established method for assessing differential item functioning (DIF). The formula for the variance of the MH DIF statistic is based on work by Phillips and Holland (1987) and Robins, Breslow, and Greenland (1986). Recent simulation studies showed that the MH variances…
Descriptors: Adaptive Testing, Evaluation Methods, Item Bias, Measurement Techniques
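For orientation only (this is the standard textbook computation, not the variance work the abstract discusses), the MH statistic pools 2x2 tables of correct/incorrect counts for the reference and focal groups across score strata, and the ETS delta scale rescales the log odds ratio:

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across score strata.
    Each stratum is (A, B, C, D): reference-correct, reference-incorrect,
    focal-correct, focal-incorrect counts."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_d_dif(strata):
    """ETS delta-scale statistic: MH D-DIF = -2.35 * ln(alpha_MH);
    negative values indicate the item favors the reference group."""
    return -2.35 * math.log(mh_odds_ratio(strata))
```

When the two groups have identical odds of success in every stratum, the odds ratio is 1 and MH D-DIF is 0.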
Campbell, Todd C. – 1995
This paper discusses alternatives to R-technique factor analysis that are applicable to counseling and psychotherapy. The traditional R-technique involves correlating columns of a data matrix. O, P, Q, S, and T techniques are discussed with particular emphasis on Q-technique. In Q-technique, people are factored across items or variables with the…
Descriptors: Counseling, Factor Analysis, Q Methodology, Research Methodology
Hendrickson, Amy B. – 2001
The purpose of the study was to compare reliability estimates for a test composed of stimulus-dependent testlets, as derived from item scores and testlet scores under univariate and multivariate generalizability theory designs, as well as to determine the influence of the number of testlets and the number of items per…
Descriptors: Comparative Analysis, Reliability, Scores, Standardized Tests
Hombo, Catherine M.; Pashley, Katharine; Jenkins, Frank – 2001
The use of grid-in formats, such as those requiring students to solve problems and fill in bubbles, is common on large-scale standardized assessments, but little is known about the use of this format with a more general population of students than high school students taking college entrance examinations, including those attending public schools…
Descriptors: Responses, Secondary Education, Secondary School Students, Standardized Tests
Kendall, John S. – 1999
This report presents an analysis of the alignment between the grade level standards of the "South Dakota Standards in Mathematics" (December 1998) and test items from the Stanford Achievement Tests, Ninth Edition (Stanford 9). The tests of interest were: (1) Form S, Primary 2 (grade 2); (2) Form S, Intermediate 1 (grade 4); (3) Form S,…
Descriptors: Academic Standards, Achievement Tests, Elementary Secondary Education, Mathematics
Allen, Sally; Sudweeks, Richard R. – 2001
A study was conducted to identify local item dependence (LID) in the context-dependent item sets used in an examination prepared for use in an introductory university physics class and to assess the effects of LID on estimates of the reliability and standard error of measurement. Test scores were obtained for 487 students in the physics class. The…
Descriptors: College Students, Error of Measurement, Higher Education, Physics
Reese, Lynda; McKinley, Robert – 1993
In item calibration using LOGIST (M. Wingersky, R. Patrick, and F. Lord, 1987), when the program determines that it cannot accurately estimate the c-parameter for a particular item due to insufficient information at the lower levels of ability, a common estimate of the c-parameter, called COMC, is obtained by combining all such items. The purpose of…
Descriptors: College Entrance Examinations, Estimation (Mathematics), Law Schools, Law Students
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Hau, Kit-Tai; Wen, Jian-Bing; Chang, Hua-Hua – 2002
In the a-stratified method, a popular and efficient item exposure control strategy proposed by H. Chang (H. Chang and Z. Ying, 1999; K. Hau and H. Chang, 2001) for computerized adaptive testing (CAT), the item pool and the item selection process have usually been divided into four strata and four corresponding stages. In a series of simulation…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
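A minimal sketch of the selection rule the two abstracts above describe (an illustration under simplifying assumptions, not the authors' simulation code): sort the pool by discrimination, cut it into strata, and at stage s choose, from stratum s, the unused item whose difficulty is closest to the current ability estimate.

```python
def a_stratified_select(pool, theta, stage, n_strata=4):
    """Next-item choice under an a-stratified design: low-a items are
    used in early stages, high-a items reserved for later stages.
    Each item is a dict with keys "a" (discrimination), "b" (difficulty),
    and optionally "used"."""
    ranked = sorted(pool, key=lambda item: item["a"])
    size = len(ranked) // n_strata
    lo = stage * size
    hi = len(ranked) if stage == n_strata - 1 else lo + size
    stratum = [it for it in ranked[lo:hi] if not it.get("used")]
    # within the stratum, match difficulty to the interim ability estimate
    return min(stratum, key=lambda item: abs(item["b"] - theta))
```

Because early stages draw only from the low-discrimination stratum, the high-a items that greedy maximum-information selection would overexpose are held back until the ability estimate has stabilized.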