Showing 1 to 15 of 22 results
Peer reviewed
Moses, Tim; Liu, Jinghua; Tan, Adele; Deng, Weiling; Dorans, Neil J. – ETS Research Report Series, 2013
In this study, differential item functioning (DIF) methods utilizing 14 different matching variables were applied to assess DIF in the constructed-response (CR) items from 6 forms of 3 mixed-format tests. Results suggested that the methods might produce distinct patterns of DIF results for different tests and testing programs, in that the DIF…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Item Analysis
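The Mantel-Haenszel procedure is one of the standard DIF methods of the kind this abstract refers to, conditioning on a matching variable such as total score. The sketch below is a generic illustration, not the authors' procedure; the function name, the quantile-based stratification, and the choice of total score as the matching variable are all assumptions made for the example.

```python
import numpy as np

def mantel_haenszel_dif(item, total, group, n_strata=5):
    """Mantel-Haenszel common odds ratio for one dichotomous item.

    item  : 0/1 responses to the studied item
    total : matching variable (here, total test score)
    group : 0 = reference group, 1 = focal group
    Values far from 1 suggest DIF after conditioning on the match.
    """
    item, total, group = map(np.asarray, (item, total, group))
    # Stratify examinees on the matching variable via quantile cuts.
    edges = np.quantile(total, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, total, side="right") - 1,
                     0, n_strata - 1)
    num, den = 0.0, 0.0
    for k in range(n_strata):
        s = strata == k
        n = s.sum()
        if n == 0:
            continue
        a = np.sum(s & (group == 0) & (item == 1))  # reference, correct
        b = np.sum(s & (group == 0) & (item == 0))  # reference, incorrect
        c = np.sum(s & (group == 1) & (item == 1))  # focal, correct
        d = np.sum(s & (group == 1) & (item == 0))  # focal, incorrect
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else np.nan
```

Comparing the resulting odds ratios across different choices of matching variable is the kind of sensitivity analysis the study describes, since the conditioning variable can change which items are flagged.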
Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
Descriptors: Reading Comprehension, Testing Programs, Statistical Analysis, Grade 7
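Split-half reliability, one of the analyses named in the abstract above, is computed by correlating scores on two half-tests and applying the Spearman-Brown correction to estimate full-length reliability. A minimal sketch, not the easyCBM analysis itself; the odd-even split is an assumption for the example:

```python
import numpy as np

def split_half_reliability(responses):
    """Odd-even split-half reliability with Spearman-Brown correction.

    responses : (n_examinees, n_items) matrix of scored item responses.
    """
    X = np.asarray(responses, dtype=float)
    odd = X[:, 0::2].sum(axis=1)   # score on odd-numbered items
    even = X[:, 1::2].sum(axis=1)  # score on even-numbered items
    r = np.corrcoef(odd, even)[0, 1]  # correlation between the halves
    return 2 * r / (1 + r)            # step up to full test length
```

The correction is needed because the raw half-test correlation understates the reliability of a test twice as long.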
Peer reviewed
Puhan, Gautam – Applied Measurement in Education, 2009
The purpose of this study is to determine the extent of scale drift on a test that employs cut scores. It was essential to examine scale drift for this testing program because new forms in this testing program are often put on scale through a series of intermediate equatings (known as equating chains). This process may cause equating error to…
Descriptors: Testing Programs, Testing, Measurement Techniques, Item Response Theory
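An equating chain puts a new form on the base scale by composing a series of intermediate links, and each link contributes its own sampling error, which is why long chains can drift from a direct equating. The sketch below uses mean-sigma linear equating purely for illustration; the abstract does not specify the equating method, and the function names are assumptions.

```python
import numpy as np

def linear_link(x_scores, y_scores):
    """Mean-sigma linear equating: returns f with f(x) on the scale of y."""
    mx, sx = np.mean(x_scores), np.std(x_scores)
    my, sy = np.mean(y_scores), np.std(y_scores)
    return lambda x: my + (sy / sx) * (x - mx)

def chain_links(links):
    """Compose intermediate equatings: new form -> ... -> base scale.

    Each link carries its own estimation error, so the composed
    function accumulates error along the chain.
    """
    def to_base(x):
        for link in links:
            x = link(x)
        return x
    return to_base

# Hypothetical usage: equate form C to B, then B to the base form A.
# to_base = chain_links([linear_link(scores_c, scores_b),
#                        linear_link(scores_b, scores_a)])
```

Scale drift of the kind the study examines shows up when the chained conversion disagrees systematically with a direct equating to the base form, which matters most near cut scores.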
Niemi, David; Vallone, Julia; Wang, Jia; Griffin, Noelle – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2007
Many districts and schools across the U.S. have begun to develop and administer assessments to complement state testing systems and provide additional information to monitor curriculum, instruction, and schools. In advance of this trend, the Jackson Public Schools (JPS) district has had a district benchmark testing system in place for many years.…
Descriptors: Public Schools, Testing Programs, Educational Testing, Item Analysis
Crehan, Kevin D.; Haladyna, Thomas M. – 1994
More attention is currently being paid to the distractors of a multiple-choice test item (Thissen, Steinberg, and Fitzpatrick, 1989). A systematic relationship exists between the keyed response and distractors in multiple-choice items (Levine and Drasgow, 1983). New scoring methods have been introduced, computer programs developed, and research…
Descriptors: Comparative Analysis, Computer Assisted Testing, Distractors (Tests), Models
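A common way to examine the systematic relationship between the keyed response and the distractors is to compute the point-biserial correlation of each option with a criterion score. The sketch below illustrates that generic check; it is not one of the new scoring methods the abstract alludes to, and the function name and rest-score criterion are assumptions.

```python
import numpy as np

def option_point_biserials(choices, key, n_options):
    """Point-biserial correlation of each option with the rest score.

    choices : (n_examinees, n_items) matrix of selected options (0..n_options-1)
    key     : length-n_items array of keyed options
    Returns an (n_items, n_options) array. A well-behaved item shows a
    positive value for the key and negative values for the distractors.
    """
    choices = np.asarray(choices)
    key = np.asarray(key)
    scored = (choices == key).astype(float)   # 0/1 item scores
    total = scored.sum(axis=1)
    rows = []
    for j in range(choices.shape[1]):
        rest = total - scored[:, j]           # criterion excluding item j
        rows.append([np.corrcoef((choices[:, j] == a).astype(float),
                                 rest)[0, 1]
                     for a in range(n_options)])
    return np.array(rows)
```

Scoring models that weight distractors build on exactly this kind of option-level information rather than treating all wrong answers alike.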
Klein, Stephen P.; Bolus, Roger – 1983
One way to reduce the likelihood of one examinee copying another's answers on large-scale tests that require all examinees to answer the same set of questions is to use multiple test forms that differ in item ordering. This study was conducted to determine whether varying the sequence in which blocks of items were presented to…
Descriptors: Adults, Cheating, Cost Effectiveness, Item Analysis
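Generating forms that differ only in the order of item blocks, as in this study's design, can be sketched as below. The function name and the block-permutation scheme are assumptions for illustration; the point is that every form contains the same items, so responses map back to a single master key for scoring.

```python
import random

def make_scrambled_forms(items, block_size, n_forms, seed=42):
    """Build test forms that differ only in the order of item blocks.

    items      : list of item IDs in the master form's order
    block_size : number of items per block (items stay intact within a block)
    Returns n_forms reorderings of the same item IDs.
    """
    blocks = [items[i:i + block_size]
              for i in range(0, len(items), block_size)]
    rng = random.Random(seed)  # fixed seed for reproducible forms
    forms = []
    for _ in range(n_forms):
        order = blocks[:]
        rng.shuffle(order)
        forms.append([item for block in order for item in block])
    return forms
```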
Peer reviewed
Haladyna, Thomas M. – Evaluation and the Health Professions, 1987
A set of guidelines for planning and establishing a certification testing program is presented. The guidelines are organized into three components: (1) content specification; (2) item development; and (3) test design. A mythical new certification testing program, called the Tennis Players' Certification Testing Program, is used to illustrate the…
Descriptors: Certification, Guidelines, Health Occupations, Higher Education
1978
Sample test questions are given for the Proficiency Testing Program adopted by the Idaho State Department of Education. Except for the writing skills test item, which requires a writing sample, all questions are objective, multiple choice items. Sample questions are given for reading, spelling, and mathematics. (MH)
Descriptors: Basic Skills, Criterion Referenced Tests, Minimum Competency Testing, Multiple Choice Tests
Dutcher, Peggy – 1990
Authentic reading assessment is examined, focusing on its implementation within the Michigan Essential Skills Reading Test (MESRT). Authentic reading assessment emerged as a response to research that indicates that reading is not a particular skill but an interaction among reader, text, and the context of the reading situation. Unlike formal…
Descriptors: Alternative Assessment, Elementary Secondary Education, Evaluation Research, Multiple Choice Tests
Ohio State Dept. of Education, Columbus. – 1995
This booklet contains the practice tests for the Sixth-grade Ohio Proficiency Tests. Student work on these practice tests helps the teacher evaluate how well the student understands the subjects of writing, reading, mathematics, citizenship, and science. The writing test consists of two writing tasks based on one stimulus passage, and the reading,…
Descriptors: Academic Achievement, Achievement Tests, Citizenship, Elementary School Students
New Mexico State Dept. of Education, Santa Fe. Assessment and Evaluation Unit. – 1995
The State Department of Education is releasing the domain specifications of the New Mexico High School Competency Examination (NMHSCE) for educator use because the Spring 1996 administration of the test will be in a different form. The NMHSCE is a high school graduation requirement in New Mexico. It assesses students' competence in writing,…
Descriptors: Achievement Tests, Competence, Constructed Response, Curriculum Development
Peer reviewed
Yen, Wendy M. – Journal of Educational Measurement, 1993
Results from the Maryland School Performance Assessment Program for 5,392 elementary school students and from the Comprehensive Tests of Basic Skills (multiple choice) for a national sample are used to explore local item independence (LID) of test items. Some strategies are suggested for measuring LID in performance assessments. (SLD)
Descriptors: Educational Assessment, Elementary Education, Elementary School Students, Equations (Mathematics)
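One standard index of local item dependence is Yen's Q3, the correlation between item residuals after an item response model has been fit. The sketch below is a minimal illustration of that index, assuming model-implied probabilities are supplied by a separately fitted model; the abstract does not specify which LID measures the study suggests.

```python
import numpy as np

def q3_statistic(responses, prob):
    """Yen's Q3 index of local item dependence.

    responses : (n_examinees, n_items) scored 0/1 responses
    prob      : (n_examinees, n_items) model-implied probabilities of a
                correct response from a fitted IRT model
    Returns the (n_items, n_items) matrix of residual correlations;
    pairs with values well above zero are candidates for local dependence.
    """
    resid = np.asarray(responses, float) - np.asarray(prob, float)
    return np.corrcoef(resid, rowvar=False)
```

Local dependence is a particular concern for performance assessments like the one studied here, where several items can hinge on a shared stimulus or task.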
Stansfield, Charles W.; And Others – 1990
The development and field testing of a proficiency test in English as a Second Language for non-native speakers teaching on Guam is reported. The resulting instrument measures four language skills (listening, reading, writing, and speaking). The listening measure uses natural language that might be heard by a classroom teacher. The reading measure…
Descriptors: Databases, Elementary Secondary Education, English (Second Language), Interviews
Wilson, Linda Dager; Zhang, Liru – 1998
This study is based on data from a state-wide assessment that included both multiple-choice and constructed-response items. The intent of the study was to see whether item types make a difference in gender results. The items on both tests were categorized according to whether they assessed procedural knowledge, concepts, problem solving, or…
Descriptors: Age Differences, Constructed Response, Elementary School Students, Elementary Secondary Education
Masters, James R. – 1986
In 1985, for the first time, Pennsylvania's student assessment program included measures of a higher order thinking skills goal termed Analytical Thinking. These tests utilize a decision-making model to assess such skills as drawing inferences, identifying appropriate information to gather before making a decision, analogical reasoning,…
Descriptors: Abstract Reasoning, Academic Achievement, Age Differences, Cognitive Ability