Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 5 |
Since 2006 (last 20 years) | 7 |
Source
Applied Measurement in Education | 20 |
Author
Wise, Steven L. | 3 |
Haladyna, Thomas M. | 2 |
Twing, Jon S. | 2 |
Cohen, Dale J. | 1 |
Cruse, Keith L. | 1 |
Feldt, Leonard | 1 |
Gallant, Dorinda J. | 1 |
Gao, Lingyun | 1 |
Gao, Xiaohong | 1 |
Green, Donald Ross | 1 |
Greiff, Samuel | 1 |
Publication Type
Journal Articles | 20 |
Reports - Research | 11 |
Reports - Evaluative | 6 |
Reports - Descriptive | 3 |
Information Analyses | 2 |
Historical Materials | 1 |
Speeches/Meeting Papers | 1 |
Education Level
Secondary Education | 3 |
Elementary Education | 2 |
Elementary Secondary Education | 2 |
Grade 5 | 2 |
Higher Education | 2 |
Middle Schools | 2 |
Postsecondary Education | 2 |
Grade 10 | 1 |
Grade 8 | 1 |
High Schools | 1 |
Intermediate Grades | 1 |
Assessments and Surveys
Program for International Student Assessment | 2 |
Texas Assessment of Academic Skills | 2 |
Iowa Tests of Basic Skills | 1 |
Iowa Tests of Educational Development | 1 |
SAT (College Admission Test) | 1 |
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
Evidence is mounting in support of the guidance to make greater use of three-option multiple-choice items. On the basis of theoretical analyses, empirical results, and practical considerations, such items are of equal or higher quality than four- or five-option items, and more of them can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
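As a rough, hedged illustration of the trade-off described in the abstract above: moving from four options to three raises the blind-guessing chance score, but if the shorter items allow more items per form, a Spearman-Brown projection suggests reliability and content coverage need not suffer. All numbers below are hypothetical, not values from the study.

```python
# Hypothetical illustration of 3- vs 4-option multiple-choice trade-offs.
# Numbers are assumptions for illustration, not results from the study.

def chance_score(num_options: int) -> float:
    """Expected proportion correct under blind guessing."""
    return 1.0 / num_options

def spearman_brown(reliability: float, length_factor: float) -> float:
    """Projected reliability when test length is multiplied by length_factor."""
    return (length_factor * reliability) / (1.0 + (length_factor - 1.0) * reliability)

base_reliability = 0.80   # assumed reliability of a 40-item, 4-option form
length_factor = 50 / 40   # assume 3-option items are faster, so 50 fit in the same time

print(f"Chance score, 4 options: {chance_score(4):.2f}")   # 0.25
print(f"Chance score, 3 options: {chance_score(3):.2f}")   # 0.33
print(f"Projected reliability with 25% more items: "
      f"{spearman_brown(base_reliability, length_factor):.3f}")  # 0.833
```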
Wise, Steven L.; Kuhfeld, Megan R.; Soland, James – Applied Measurement in Education, 2019
When we administer educational achievement tests, we want to be confident that the resulting scores validly indicate what the test takers know and can do. However, if the test is perceived as low stakes by the test taker, disengaged test taking sometimes occurs, which poses a serious threat to score validity. When computer-based tests are used,…
Descriptors: Guessing (Tests), Computer Assisted Testing, Achievement Tests, Scores
Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2017
There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…
Descriptors: Test Bias, Computer Assisted Testing, Item Response Theory, Achievement Tests
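Response Time Effort (RTE) is commonly computed as the proportion of items on which an examinee's response time meets or exceeds a rapid-guessing threshold. The sketch below follows that reading; the response times and the flat 5-second threshold are hypothetical, and operational work typically derives item-specific thresholds.

```python
# Minimal sketch of a Response Time Effort (RTE) style index: the proportion
# of items answered with "solution behavior", i.e., response time at or above
# a rapid-guessing threshold. Times and thresholds here are hypothetical.

from typing import Sequence

def response_time_effort(response_times: Sequence[float],
                         thresholds: Sequence[float]) -> float:
    """Proportion of items whose response time >= that item's threshold."""
    if len(response_times) != len(thresholds):
        raise ValueError("One threshold per item is required.")
    engaged = sum(rt >= th for rt, th in zip(response_times, thresholds))
    return engaged / len(response_times)

# Example: 8 items with a flat 5-second threshold (hypothetical).
rts = [12.4, 3.1, 22.0, 1.8, 9.5, 2.2, 15.0, 7.7]
thresholds = [5.0] * len(rts)
print(f"RTE = {response_time_effort(rts, thresholds):.2f}")  # 0.62 here
```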
Cohen, Dale J.; Zhang, Jin; Wothke, Werner – Applied Measurement in Education, 2019
Construct-irrelevant cognitive complexity of some items in the statewide grade-level assessments may impose performance barriers for students with disabilities who are ineligible for alternate assessments based on alternate achievement standards. This has spurred research into whether items can be modified to reduce complexity without affecting…
Descriptors: Test Items, Accessibility (for Disabled), Students with Disabilities, Low Achievement
Herde, Christoph Nils; Wüstenberg, Sascha; Greiff, Samuel – Applied Measurement in Education, 2016
Complex Problem Solving (CPS) is seen as a cross-curricular 21st century skill that has attracted interest in large-scale-assessments. In the Programme for International Student Assessment (PISA) 2012, CPS was assessed all over the world to gain information on students' skills to acquire and apply knowledge while dealing with nontransparent…
Descriptors: Problem Solving, Achievement Tests, Foreign Countries, International Assessment
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Wise, Steven L.; Pastor, Dena A.; Kong, Xiaojing J. – Applied Measurement in Education, 2009
Previous research has shown that rapid-guessing behavior can degrade the validity of test scores from low-stakes proficiency tests. This study examined, using hierarchical generalized linear modeling, examinee and item characteristics for predicting rapid-guessing behavior. Several item characteristics were found significant; items with more text…
Descriptors: Guessing (Tests), Achievement Tests, Correlation, Test Items
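As a simplified stand-in for the hierarchical generalized linear modeling used in the study, the sketch below fits a single-level logistic regression predicting rapid-guess responses from two item characteristics in the spirit of the abstract (amount of item text and item position). The simulated data and predictors are illustrative assumptions, not the study's specification.

```python
# Simplified stand-in for a hierarchical generalized linear model:
# a single-level logistic regression predicting rapid-guess responses from
# item characteristics. All data and predictors are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000  # item-by-examinee response records (hypothetical)

text_length = rng.uniform(20, 300, n)     # words of item text
item_position = rng.integers(1, 51, n)    # position on a 50-item form

# Simulate rapid guessing as more likely for text-heavy items late in the test.
logit = -4.0 + 0.004 * text_length + 0.03 * item_position
rapid_guess = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([text_length, item_position])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize for stable fitting
model = LogisticRegression().fit(X, rapid_guess)
print("Standardized coefficients (text length, position):", model.coef_[0])
```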
Miller, G. Edward; Yoes, Michael E.; Twing, Jon S. – Applied Measurement in Education, 2004
Two models are presented in this article for estimating the proportion of students who would pass all of three or more content area tests given that none have actually been tested in more than two of the content areas. The first model allows one to estimate the proportion of students who would pass all of three or more content area tests from the…
Descriptors: Scores, Standardized Tests, Student Evaluation, Testing Programs
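The abstract does not reproduce the two models, so the sketch below shows only a naive alternative for the same target quantity: chaining pairwise pass rates under a conditional-independence assumption to approximate the proportion of students who would pass all three content-area tests. The pass rates are hypothetical, and this estimator is not either of the article's models.

```python
# Naive illustration of estimating the proportion of students passing all
# three content-area tests from pairwise data, assuming performance on tests
# A and C is conditionally independent given performance on test B.
# This is NOT either of the article's models; all rates are hypothetical.

def pass_all_three(p_ab: float, p_bc: float, p_b: float) -> float:
    """P(A and B and C) ~= P(A and B) * P(C | B) = P(A and B) * P(B and C) / P(B)."""
    if not 0 < p_b <= 1:
        raise ValueError("P(B) must be in (0, 1].")
    return p_ab * p_bc / p_b

# Hypothetical pairwise and marginal pass rates.
p_reading_math = 0.70   # passed both reading (A) and math (B)
p_math_writing = 0.68   # passed both math (B) and writing (C)
p_math = 0.80           # passed math (B)

print(f"Estimated proportion passing all three: "
      f"{pass_all_three(p_reading_math, p_math_writing, p_math):.3f}")  # 0.595
```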

Green, Donald Ross; And Others – Applied Measurement in Education, 1989
Potential benefits of using item response theory in test construction are evaluated using the experience and evidence accumulated during nine years of using a three-parameter model in the development of major achievement batteries. Topics addressed include error of measurement, test equating, item bias, and item difficulty. (TJH)
Descriptors: Achievement Tests, Computer Assisted Testing, Difficulty Level, Equated Scores
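For reference, the three-parameter logistic (3PL) item response function underlying the model mentioned above is sketched below; the scaling constant and the example item parameters are conventional or hypothetical choices, not values from the article.

```python
# Standard three-parameter logistic (3PL) item response function:
# P(correct | theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b)))
# where a = discrimination, b = difficulty, c = pseudo-guessing, D = 1.7.
# The example parameters are hypothetical.

import math

def p_correct_3pl(theta: float, a: float, b: float, c: float, D: float = 1.7) -> float:
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# Example item: moderate discrimination, average difficulty, 4-option guessing floor.
for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"theta = {theta:+.1f}  P(correct) = "
          f"{p_correct_3pl(theta, a=1.0, b=0.0, c=0.25):.3f}")
```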

Ward, Cynthia A. – Applied Measurement in Education, 2000
Discusses the implications of the "GI Forum v. Texas Education Agency" (2000) decision supporting the use of the Texas Assessment of Academic Skills for other state assessment programs. Notes that the legal success of the state test in Texas was not due merely to good fortune, but to the test's adherence to legally defensible principles.…
Descriptors: Achievement Tests, Court Litigation, Elementary Secondary Education, Legal Problems

Porter, Rosalie P. – Applied Measurement in Education, 2000
Describes approaches taken in Texas to bring about academic accountability for students of limited English proficiency by evaluating and reporting annually on their progress in English-language literacy and their learning of school subjects, and by documenting this special population's growing success on state tests. (SLD)
Descriptors: Academic Accommodations (Disabilities), Academic Achievement, Accountability, Achievement Tests
Huynh, Huynh; Meyer, J. Patrick; Gallant, Dorinda J. – Applied Measurement in Education, 2004
This study examined the effect of oral administration accommodations on test structure and student performance on the mathematics portion of the South Carolina High School Exit Examination (HSEE). The examination was given at Grade 10 and was untimed. Three groups of students were studied. Two groups took the regular form. One group had recorded…
Descriptors: Grade 8, Grade 10, Mathematics Tests, Disabilities

Cruse, Keith L.; Twing, Jon S. – Applied Measurement in Education, 2000
Provides a chronological summary of the evolution of statewide achievement tests in Texas to facilitate understanding of the issues behind the "GI Forum v. Texas Education Agency" litigation (1999), which challenged the curricular links of the Texas Assessment of Academic Skills and other "opportunity to learn" issues. (SLD)
Descriptors: Achievement Tests, Educational History, Elementary Secondary Education, Graduation Requirements

Jones, Lyle V. – Applied Measurement in Education, 1988
Use of multiple-choice achievement tests is critiqued. Multiple-choice tests are considered heavily weighted toward aptitude and ill-suited to assessment of thinking. Psychometric methods for the development of alternatives to this inadequate form of testing achievement are discussed. (TJH)
Descriptors: Achievement Tests, Creative Thinking, Educational Assessment, Educational Research

Way, Walter D.; And Others – Applied Measurement in Education, 1989
The effects of using item response theory (IRT) ability estimates based on customized tests formed by selecting areas from a nationally standardized achievement test were examined. For some populations, in some conditions, IRT ability estimates can be equivalent to scores based on full-length tests. (SLD)
Descriptors: Achievement Tests, Adaptive Testing, Content Validity, Elementary Education
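To make concrete what comparing "customized" and full-length IRT ability estimates can involve, the sketch below computes grid-search maximum-likelihood estimates of ability under the 3PL model from a simulated full item set and from a subset of it. The item parameters, data generation, and estimator are illustrative assumptions, not the procedures used in the article.

```python
# Illustration of comparing IRT ability estimates from a full-length test
# versus a customized subset of its items, under the 3PL model.
# Item parameters, data generation, and the grid-search MLE are all
# illustrative choices, not the article's procedures.

import numpy as np

rng = np.random.default_rng(1)
D = 1.7

def p3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))

def mle_theta(responses, a, b, c, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood estimate of ability (theta)."""
    p = p3pl(grid[:, None], a, b, c)                       # grid x items
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Simulate a 60-item "full-length" test for one examinee with true theta = 0.5.
n_items = 60
a = rng.uniform(0.6, 1.8, n_items)
b = rng.normal(0.0, 1.0, n_items)
c = np.full(n_items, 0.2)
responses = (rng.random(n_items) < p3pl(0.5, a, b, c)).astype(float)

subset = rng.choice(n_items, size=30, replace=False)   # "customized" half-length form
print("Full-length estimate:     ", mle_theta(responses, a, b, c))
print("Customized-subset estimate:", mle_theta(responses[subset], a[subset], b[subset], c[subset]))
```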