Willse, John T. – Measurement and Evaluation in Counseling and Development, 2017
This article provides a brief introduction to the Rasch model. Motivation for using Rasch analyses is provided. Important Rasch model concepts and key aspects of result interpretation are introduced, with major points reinforced using a simulation demonstration. Concrete guidelines are provided regarding sample size and the evaluation of items.
Descriptors: Item Response Theory, Test Results, Test Interpretation, Simulation
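The Rasch model the abstract introduces is compact: the probability of a correct response is P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)), where theta is person ability and b is item difficulty. A minimal sketch of the kind of simulation demonstration the abstract mentions; sample sizes and parameter values here are illustrative, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n_persons, n_items = 500, 20
theta = rng.normal(0.0, 1.0, n_persons)   # person abilities
b = np.linspace(-2.0, 2.0, n_items)       # item difficulties

# Rasch model: P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = rng.binomial(1, p)            # simulated dichotomous responses

print(responses.mean(axis=0).round(2))    # easier items yield higher proportions correct
```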
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article focuses on item response theory-based likelihood ratio testing for DIF (IRT-LR or…
Descriptors: Simulation, Item Response Theory, Testing, Questionnaires
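IRT-LR testing compares two nested models: a compact model that constrains the studied item's parameters to be equal across groups, and an augmented model that frees them. The decision step is sketched below with invented log-likelihood values; the model fitting itself is omitted:

```python
from scipy.stats import chi2

# Hypothetical log-likelihoods from two nested IRT model fits (illustrative values)
loglik_compact = -10234.7    # studied item's parameters equal across groups
loglik_augmented = -10228.1  # studied item's parameters free to differ

g2 = -2.0 * (loglik_compact - loglik_augmented)  # likelihood-ratio statistic
df = 2                                           # number of parameters freed
p_value = chi2.sf(g2, df)

print(f"G2 = {g2:.2f}, df = {df}, p = {p_value:.4f}")  # small p suggests DIF
```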
Woods, Carol M. – Applied Psychological Measurement, 2009
Differential item functioning (DIF) occurs when items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Methods for testing DIF require matching members of different groups on an estimate of the construct. Preferably, the estimate is…
Descriptors: Test Results, Testing, Item Response Theory, Test Bias
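The matching idea can be illustrated without any latent-trait estimation: in the sketch below, a crude "rest score" (total score excluding the studied item) stands in for the construct estimate the abstract discusses. This is a simplified stand-in, not the method evaluated in the article, and the data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical response matrix and group labels (illustrative only)
responses = rng.binomial(1, 0.6, size=(400, 20))
group = rng.integers(0, 2, size=400)       # 0 = reference, 1 = focal
studied_item = 5

# Match examinees on an estimate of the construct; here, the rest score
rest = responses.sum(axis=1) - responses[:, studied_item]

for score in np.unique(rest):
    at_level = rest == score
    ref = responses[at_level & (group == 0), studied_item]
    foc = responses[at_level & (group == 1), studied_item]
    if len(ref) and len(foc):
        # Absent DIF, matched groups answer the studied item similarly
        print(score, ref.mean().round(2), foc.mean().round(2))
```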
Li, Yuan H.; Schafer, William D. – 2002
An empirical study of Yen's (W. Yen, 1997) analytic formula for the standard error of a percent-above-cut [SE(PAC)] was conducted. This formula was derived from variance component information gathered in the context of generalizability theory. SE(PAC)s were estimated by different methods of estimating variance components (e.g., W. Yen's…
Descriptors: Cutting Scores, Error of Measurement, Generalizability Theory, Simulation
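Yen's analytic formula is not reproduced here. As a point of reference, SE(PAC) can also be estimated empirically by replicating the measurement process, which is the benchmark an analytic formula would be checked against. A simple sketch under binomial-error assumptions, with all values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

n_examinees, n_items, n_reps = 1000, 40, 2000
cut = 0.5                                   # cut score on the proportion-correct metric
true_score = rng.beta(5, 3, n_examinees)    # hypothetical true proportion-correct scores

pac = np.empty(n_reps)
for r in range(n_reps):
    # Each replication adds fresh binomial measurement error to fixed true scores
    observed = rng.binomial(n_items, true_score) / n_items
    pac[r] = np.mean(observed >= cut) * 100  # percent above cut

print(f"mean PAC = {pac.mean():.2f}%, empirical SE(PAC) = {pac.std(ddof=1):.3f}")
```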

You, Soon-Hyung; Stone-Romero, Eugene F. – Educational and Psychological Measurement, 1996
To clarify R. Gillett's (1991) findings on the inequality of mean test scores for minority and majority examinees, the standard errors of quota-selected sample means and the sampling distribution of those means were studied through Monte Carlo simulation. Results show that the quota-selection inequality results from…
Descriptors: Error of Measurement, Minority Groups, Monte Carlo Methods, Sampling
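A minimal version of such a Monte Carlo study: draw both groups from the same score distribution, select fixed quotas top-down within each group, and examine the means and standard errors of the selectees' scores. Pool sizes and quotas below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

n_reps = 5000
n_majority, n_minority = 400, 100   # hypothetical applicant pool sizes
quota_maj, quota_min = 40, 10       # fixed quotas (same 10% selection ratio)

mean_maj = np.empty(n_reps)
mean_min = np.empty(n_reps)
for r in range(n_reps):
    # Identical score distributions, so any difference among selectees
    # is an artifact of quota selection itself
    scores_maj = rng.normal(0.0, 1.0, n_majority)
    scores_min = rng.normal(0.0, 1.0, n_minority)
    mean_maj[r] = np.sort(scores_maj)[-quota_maj:].mean()  # top-down selection
    mean_min[r] = np.sort(scores_min)[-quota_min:].mean()

print(f"majority selectees: mean {mean_maj.mean():.3f}, SE {mean_maj.std(ddof=1):.3f}")
print(f"minority selectees: mean {mean_min.mean():.3f}, SE {mean_min.std(ddof=1):.3f}")
```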