Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 0
  Since 2006 (last 20 years): 4
Descriptor
  Research Methodology: 10
  Test Items: 10
  Test Bias: 4
  Test Validity: 4
  Evaluation Methods: 3
  Item Response Theory: 3
  Measurement Techniques: 3
  Psychometrics: 3
  Achievement Tests: 2
  Computation: 2
  Error Patterns: 2
Source
  Educational and Psychological Measurement: 10
Author
  Hambleton, Ronald K.: 2
  Bhola, Dennison S.: 1
  Feldt, Leonard S.: 1
  Hauser, Carl: 1
  He, Wei: 1
  Kiers, Henk A. L.: 1
  Kong, Xiaojing J.: 1
  Laughlin, James E.: 1
  Ma, Lingling: 1
  Normand, Jacques: 1
  Oosterhof, Albert C.: 1
Publication Type
  Journal Articles: 10
  Reports - Research: 5
  Reports - Evaluative: 4
  Reports - Descriptive: 1
  Speeches/Meeting Papers: 1
Education Level
  Elementary Secondary Education: 1
  Higher Education: 1
Assessments and Surveys
  SRA Achievement Series: 1
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making

Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis

Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing data rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis

Zenisky, April L.; Hambleton, Ronald K.; Robin, Frederic – Educational and Psychological Measurement, 2003
Studied a two-stage methodology for evaluating differential item functioning (DIF) in large-scale assessment data using a sample of 60,000 students taking a large-scale assessment. Findings illustrate the merit of iterative approaches for DIF detection, since items identified at one stage were not necessarily the same as those identified at the…
Descriptors: Item Bias, Large Scale Assessment, Research Methodology, Test Items

Oosterhof, Albert C.; And Others – Educational and Psychological Measurement, 1984
A supplementary treatment is proposed which helps identify sources of bias affecting groups of test items. This treatment is illustrated with the transformed item-difficulty method as applied to an evaluation of a test used to help select applicants to be admitted to an aviation training program. (Author/BW)
Descriptors: Adults, Aptitude Tests, Difficulty Level, Graphs

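The transformed item-difficulty (delta-plot) approach this entry builds on is commonly attributed to Angoff: per-group proportions correct are converted to delta values, a principal-axis line is fitted through the (delta, delta) pairs, and items far from the line are flagged as potentially biased. A minimal sketch under those assumptions (function names are illustrative, not taken from the article):

```python
from statistics import NormalDist
import math

def deltas(p_values):
    """Convert item proportions correct to Angoff delta values (mean 13, SD 4)."""
    nd = NormalDist()
    return [13 + 4 * nd.inv_cdf(1 - p) for p in p_values]

def delta_plot_distances(p_group1, p_group2):
    """Fit the principal axis through (delta1, delta2) pairs and return each
    item's signed perpendicular distance from it; large |d| suggests possible
    bias. Assumes the two delta sets are correlated (covariance != 0)."""
    x, y = deltas(p_group1), deltas(p_group2)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / n
    syy = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    # Principal-axis (major-axis) slope of the bivariate scatter.
    b = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = my - b * mx
    # Perpendicular distance from each point to the line y = a + b*x.
    return [(b * xi - yi + a) / math.sqrt(b * b + 1) for xi, yi in zip(x, y)]
```

When the two groups rank items identically, all distances are near zero; an item that is unusually hard for one group relative to the other stands out with a large distance.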
Kong, Xiaojing J.; Wise, Steven L.; Bhola, Dennison S. – Educational and Psychological Measurement, 2007
This study compared four methods for setting item response time thresholds to differentiate rapid-guessing behavior from solution behavior. Thresholds were either (a) common for all test items, (b) based on item surface features such as the amount of reading required, (c) based on visually inspecting response time frequency distributions, or (d)…
Descriptors: Test Items, Reaction Time, Timed Tests, Item Response Theory

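Of the four threshold-setting methods listed, the simplest is (a), a single common threshold applied to every item. A minimal sketch under that assumption — the 3-second default and the function names are illustrative choices, not values from the study:

```python
def flag_rapid_guesses(response_times, threshold=3.0):
    """Classify each response as rapid guessing (True) or solution behavior
    (False) using one common time threshold, in seconds, for all items."""
    return [t < threshold for t in response_times]

def response_time_effort(response_times, threshold=3.0):
    """Proportion of items answered with solution behavior (the response time
    effort idea of Wise and Kong): 1.0 means full effort, lower values mean
    more rapid guessing."""
    flags = flag_rapid_guesses(response_times, threshold)
    return 1 - sum(flags) / len(flags)
```

The item-specific methods (b)–(d) would replace the single `threshold` with a per-item value, but the classification step stays the same.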

Roberts, James S.; Laughlin, James E.; Wedell, Douglas H. – Educational and Psychological Measurement, 1999
Highlights the theoretical differences between the approaches of R. Likert (1932) and L. Thurstone (1928) to attitude measurement. Uses real and simulated data on attitudes toward abortion to illustrate that attitude researchers should pay more attention to the empirical-response characteristics of items on a Likert attitude questionnaire. (SLD)
Descriptors: Abortions, Attitude Measures, Attitudes, Likert Scales

Feldt, Leonard S. – Educational and Psychological Measurement, 2005
To meet the requirements of the No Child Left Behind Act, school districts and states must compile summary reports of the levels of student achievement in reading and mathematics. The levels are to be described in broad categories: "basic and below," "proficient," or "advanced." Educational units are given considerable latitude in defining the…
Descriptors: Federal Legislation, Academic Achievement, Test Items, Test Validity

Raju, Nambury S.; Normand, Jacques – Educational and Psychological Measurement, 1985
The Regression Model, popular in selection bias research, is proposed for use in item bias detection, providing a common framework for both types of bias. An empirical test of this new method, now called the Regression Bias method, and a comparison with other commonly used item bias detection methods are presented. (Author/BS)
Descriptors: Achievement Tests, Intermediate Grades, Item Analysis, Junior High Schools

Rogers, H. Jane; Hambleton, Ronald K. – Educational and Psychological Measurement, 1989
The validity of logistic test models and computer simulation methods for generating sampling distributions of item bias statistics was evaluated under the hypothesis of no item bias. Test data from 937 ninth-grade students were used to develop 7 steps for applying computer-simulated baseline statistics in test development. (SLD)
Descriptors: Computer Simulation, Educational Research, Evaluation Methods, Grade 9
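The general idea behind such computer-simulated baselines — generate data under the no-bias hypothesis, compute the bias statistic repeatedly, and use an upper percentile of the simulated distribution as a flagging cutoff — can be sketched as follows. This is a simplified illustration (Rasch model, a crude group-difference statistic), not the seven-step procedure or the statistics from the article:

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response at ability theta, difficulty b."""
    return 1 / (1 + math.exp(-(theta - b)))

def simulate_null_statistic(n_examinees, b, rng):
    """Simulate one no-bias dataset for a single item and return the absolute
    difference in proportion correct between two randomly split groups."""
    thetas = [rng.gauss(0, 1) for _ in range(n_examinees)]
    scores = [1 if rng.random() < p_correct(t, b) else 0 for t in thetas]
    half = n_examinees // 2
    return abs(sum(scores[:half]) / half - sum(scores[half:]) / (n_examinees - half))

def baseline_cutoff(n_reps=1000, n_examinees=500, b=0.0, seed=1):
    """95th percentile of the statistic under the no-bias hypothesis; observed
    values above this cutoff would be flagged for review."""
    rng = random.Random(seed)
    stats = sorted(simulate_null_statistic(n_examinees, b, rng) for _ in range(n_reps))
    return stats[int(0.95 * n_reps)]
```

Because the simulated data contain no bias by construction, the resulting cutoff reflects only sampling variability, which is what makes it a usable baseline for judging observed item-bias statistics.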