Showing all 8 results
Peer reviewed
Meyers, Jason L.; Miller, G. Edward; Way, Walter D. – Applied Measurement in Education, 2009
In operational testing programs using item response theory (IRT), item parameter invariance is threatened when an item appears in a different location on the live test than it did when it was field tested. This study utilizes data from a large state's assessments to model change in Rasch item difficulty (RID) as a function of item position change,…
Descriptors: Test Items, Test Content, Testing Programs, Simulation
Crislip, Marian A.; Chin-Chance, Selvin – 2001
This paper discusses the use of two theories of item analysis and test construction, their strengths and weaknesses, and applications to the design of the Hawaii State Test of Essential Competencies (HSTEC). Traditional analyses of the data collected from the HSTEC field test were viewed from the perspectives of item difficulty levels and item…
Descriptors: Difficulty Level, Item Response Theory, Psychometrics, Reliability
Peer reviewed
Slinde, Jefferey A.; Linn, Robert L. – Journal of Educational Measurement, 1978
Use of the Rasch model for vertical equating of tests is discussed. Although use of the model is promising, empirical results raise questions about the adequacy of the Rasch model. Latent trait models with more parameters may be necessary. (JKS)
Descriptors: Achievement Tests, Difficulty Level, Equated Scores, Higher Education
Cope, Ronald T. – 1995
This paper deals with the problems that arise in performance assessment from the granularity that results from having a small number of tasks or prompts and raters of responses to these tasks or prompts. Two problems are discussed in detail: (1) achieving a satisfactory degree of reliability; and (2) equating or adjusting for differences of…
Descriptors: Difficulty Level, Educational Assessment, Equated Scores, High Stakes Tests
Taleporos, Betsy; And Others – 1988
In the spring of 1986, New York City began using the Metropolitan Achievement Test-6 (MAT-6) series to assess achievement in mathematics as part of a continuing end-of-year testing program. During the first two years of the program, appropriate levels of the shelf version of MAT-6 (Forms L and M) were administered to second through eighth graders.…
Descriptors: Difficulty Level, Elementary Education, Elementary School Mathematics, Elementary School Students
Steele, D. Joyce – 1985
This paper contains a comparison of descriptive information based on analyses of pilot and live administrations of the Alabama High School Graduation Examination (AHSGE). The test is composed of three subject tests: Reading, Mathematics, and Language. The study was intended to validate the test development procedure by comparing difficulty levels…
Descriptors: Achievement Tests, Comparative Testing, Difficulty Level, Graduation Requirements
Steele, D. Joyce – 1991
This paper compares descriptive information based on analyses of the pilot and live administrations of the Alabama High School Graduation Examination (AHSGE). The AHSGE, a product of decisions made in 1977 and 1984 by the Alabama State Board of Education, is composed of subject tests in reading, mathematics, and language. The pass score for each…
Descriptors: Comparative Testing, Difficulty Level, Grade 11, Graduation Requirements
Nassif, Paula M.; And Others – 1979
A procedure which employs a method of item substitution based on item difficulty is recommended for developing parallel criterion referenced test forms. This procedure is currently being used in the Florida functional literacy testing program and the Georgia teacher certification testing program. Reasons for developing parallel test forms involve…
Descriptors: Criterion Referenced Tests, Difficulty Level, Equated Scores, Functional Literacy