Publication Date
In 2025 (0)
Since 2024 (0)
Since 2021, last 5 years (4)
Since 2016, last 10 years (6)
Since 2006, last 20 years (15)
Descriptor
Error of Measurement (19)
Regression (Statistics) (19)
Test Items (19)
Item Response Theory (9)
Simulation (8)
Test Bias (7)
Comparative Analysis (5)
Evaluation Methods (5)
Item Analysis (5)
Sample Size (5)
Scores (4)
Publication Type
Journal Articles (15)
Reports - Research (12)
Dissertations/Theses - Doctoral Dissertations (2)
Reports - Descriptive (2)
Reports - Evaluative (2)
Speeches/Meeting Papers (2)
Opinion Papers (1)
Education Level
Higher Education (2)
Postsecondary Education (2)
Elementary Education (1)
Elementary Secondary Education (1)
Grade 8 (1)
Secondary Education (1)
Audience
Researchers (1)
Assessments and Surveys
ACT Assessment (1)
Program for International Student Assessment (1)
SAT (College Admission Test) (1)
Trends in International Mathematics and Science Study (1)
Chalmers, R. Philip; Zheng, Guoguo – Applied Measurement in Education, 2023
This article presents generalizations of the SIBTEST and crossing-SIBTEST statistics for differential item functioning (DIF) investigations involving more than two groups. After reviewing the original two-group setup for these statistics, the authors present a set of multigroup generalizations that supports contrast matrices for joint tests of DIF. To…
Descriptors: Test Bias, Test Items, Item Response Theory, Error of Measurement
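For readers who want the core of the two-group statistic being generalized, here is a minimal SIBTEST-style sketch: examinees are matched on the rest score, and stratum-level differences in item means are weighted by the focal group's score distribution. It omits SIBTEST's regression correction and the article's multigroup contrast machinery; the function name and stratum-size guard are illustrative.

```python
import numpy as np

def sibtest_like(item, rest, group):
    """Simplified two-group SIBTEST-style DIF statistic (illustrative).

    item  : 0/1 responses to the studied item
    rest  : matching score computed without the studied item (rest score)
    group : 0 = reference, 1 = focal
    Omits the regression correction SIBTEST applies when the groups
    differ in ability.
    """
    beta_hat, var_hat = 0.0, 0.0
    n_focal = np.sum(group == 1)
    for k in np.unique(rest):
        ref = item[(rest == k) & (group == 0)]
        foc = item[(rest == k) & (group == 1)]
        if len(ref) < 2 or len(foc) < 2:
            continue  # thin strata would be pooled in a real analysis
        w = len(foc) / n_focal             # weight by focal score density
        beta_hat += w * (ref.mean() - foc.mean())
        var_hat += w ** 2 * (ref.var(ddof=1) / len(ref)
                             + foc.var(ddof=1) / len(foc))
    z = beta_hat / np.sqrt(var_hat)        # approx. N(0, 1) under no DIF
    return beta_hat, z
```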
Ayse Bilicioglu Gunes; Bayram Bicak – International Journal of Assessment Tools in Education, 2023
The main purpose of this study is to examine the Type I error rates and statistical power of differential item functioning (DIF) techniques based on different theories under different conditions. For this purpose, a simulation study was conducted using the Mantel-Haenszel (MH), logistic regression (LR), Lord's χ², and Raju's Areas…
Descriptors: Test Items, Item Response Theory, Error of Measurement, Test Bias
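Of the compared techniques, Mantel-Haenszel is the most compact to show. The sketch below computes the continuity-corrected MH chi-square (df = 1) and the common odds ratio from 2x2 tables stratified on total score; it is a generic textbook implementation, not the simulation code used in this study.

```python
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """MH DIF chi-square and common odds ratio, matching on total score."""
    num, var, or_num, or_den = 0.0, 0.0, 0.0, 0.0
    for k in np.unique(total):
        s = total == k
        A = np.sum(s & (group == 0) & (item == 1))  # reference correct
        B = np.sum(s & (group == 0) & (item == 0))  # reference incorrect
        C = np.sum(s & (group == 1) & (item == 1))  # focal correct
        D = np.sum(s & (group == 1) & (item == 0))  # focal incorrect
        T = A + B + C + D
        if T < 2 or (A + C) == 0 or (B + D) == 0:
            continue  # stratum carries no information
        num += A - (A + B) * (A + C) / T            # observed - expected
        var += (A + B) * (C + D) * (A + C) * (B + D) / (T ** 2 * (T - 1))
        or_num += A * D / T
        or_den += B * C / T
    chi2 = (abs(num) - 0.5) ** 2 / var              # continuity corrected
    return chi2, or_num / or_den
```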
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
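To make the regression-discontinuity idea concrete, a minimal sharp-RD estimate of the jump at an assignment cutoff can be fit with local linear regression on each side. Everything below (names, the bandwidth argument, the convention that the accommodation is assigned below the cutoff) is an illustrative assumption, not the authors' framework.

```python
import numpy as np

def sharp_rd_effect(y, running, cutoff, bandwidth):
    """Local linear sharp-RD estimate of the discontinuity at a cutoff.

    y       : outcome (e.g., item score)
    running : assignment variable; units below `cutoff` get the treatment
              (here, an alternate-form accommodation)
    """
    x = running - cutoff
    keep = np.abs(x) <= bandwidth          # restrict to a local window
    x, y = x[keep], y[keep]
    t = (x < 0).astype(float)              # treated-side indicator
    X = np.column_stack([np.ones_like(x), x, t, t * x])
    beta, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
    return beta[2]                         # estimated jump at the cutoff
```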
Yesiltas, Gonca; Paek, Insu – Educational and Psychological Measurement, 2020
A log-linear model (LLM) is a well-known statistical method to examine the relationship among categorical variables. This study investigated the performance of LLM in detecting differential item functioning (DIF) for polytomously scored items via simulations where various sample sizes, ability mean differences (impact), and DIF types were…
Descriptors: Simulation, Sample Size, Item Analysis, Scores
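As a concrete (hypothetical) version of the LLM approach: test uniform DIF for one dichotomous item by comparing nested Poisson log-linear models fit to a score-level x group x response contingency table. The formula terms, names, and use of statsmodels are illustrative, not the study's design; empty cells are simply absent here, which a careful analysis would handle explicitly.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

def loglinear_dif(item, score_level, group):
    """Likelihood-ratio test for uniform DIF via log-linear models."""
    df = pd.DataFrame({"resp": item, "level": score_level, "grp": group})
    counts = df.groupby(["level", "grp", "resp"]).size().reset_index(name="n")
    fam = sm.families.Poisson()
    # No-DIF model: response and group independent given score level
    m0 = smf.glm("n ~ C(level)*C(grp) + C(level)*C(resp)",
                 data=counts, family=fam).fit()
    # Uniform-DIF model: adds a group-by-response association
    m1 = smf.glm("n ~ C(level)*C(grp) + C(level)*C(resp) + C(grp):C(resp)",
                 data=counts, family=fam).fit()
    lr = 2 * (m1.llf - m0.llf)
    return lr, chi2.sf(lr, df=m1.df_model - m0.df_model)
```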
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
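Studies of this kind usually begin by flagging RG from response times. Below is a minimal sketch using a normative threshold, a fixed fraction of each item's median time; the 10% default and the function name are illustrative choices, not taken from the article.

```python
import numpy as np

def flag_rapid_guesses(rt, threshold_frac=0.10):
    """Flag responses faster than a fraction of each item's median time.

    rt : (persons x items) array of response times in seconds
    Returns a boolean array marking presumed rapid guesses.
    """
    thresholds = threshold_frac * np.median(rt, axis=0)  # one per item
    return rt < thresholds
```

Flagged rates can then be compared between subgroups before fitting invariance models, since differential RG, rather than the items themselves, may drive apparent non-invariance.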
Kim, Jihye; Oshima, T. C. – Educational and Psychological Measurement, 2013
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. Because a test consists of multiple items, such multiple testing increases the chance of making at least one Type I error. The goal of this study was to investigate how to control the Type I error rate and power using adjustment…
Descriptors: Test Bias, Test Items, Statistical Analysis, Error of Measurement
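Two standard adjustments in this setting are Bonferroni and the Benjamini-Hochberg (BH) step-up procedure. A self-contained sketch, assuming one p-value per item from whatever DIF test was run:

```python
import numpy as np

def adjust_pvalues(pvals, method="BH"):
    """Adjust per-item DIF p-values for multiple testing."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    if method == "bonferroni":
        return np.minimum(p * m, 1.0)      # familywise error control
    # Benjamini-Hochberg: adjusted p_(i) = min over j >= i of p_(j) * m / j
    order = np.argsort(p)
    scaled = p[order] * m / (np.arange(m) + 1)
    adj = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out
```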
Li, Yanju; Brooks, Gordon P.; Johanson, George A. – Educational and Psychological Measurement, 2012
In 2009, DeMars stated that when impact exists, there will be Type I error inflation, especially with larger sample sizes and larger item discrimination parameters. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…
Descriptors: Error of Measurement, Test Bias, Test Items, Regression (Statistics)
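For reference, the LR procedure named here is usually run as a likelihood-ratio comparison of nested logistic models (the Swaminathan-Rogers approach): a score-only baseline versus a model adding group and group-by-score terms, giving a 2-df test for uniform plus nonuniform DIF. A sketch with statsmodels; names are illustrative and this is not the study's simulation code.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def logistic_regression_dif(item, total, group):
    """2-df likelihood-ratio LR-DIF test for one dichotomous item."""
    base = sm.Logit(item, sm.add_constant(total)).fit(disp=0)
    X = np.column_stack([total, group, total * group])
    full = sm.Logit(item, sm.add_constant(X)).fit(disp=0)
    lr = 2 * (full.llf - base.llf)         # uniform + nonuniform DIF
    return lr, chi2.sf(lr, df=2)
```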
Quesen, Sarah – ProQuest LLC, 2016
When studying differential item functioning (DIF) among students with disabilities (SWD), focal groups typically suffer from small sample sizes, whereas the reference group population is usually large. This makes it possible for a researcher to select a sample from the reference population that is similar to the focal group on the ability scale. Doing…
Descriptors: Test Items, Academic Accommodations (Disabilities), Testing Accommodations, Disabilities
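The sampling idea can be illustrated with greedy one-to-one nearest-neighbor matching on an ability estimate. This is a generic sketch of matched reference-group selection, not the dissertation's exact procedure, and it assumes the reference pool is at least as large as the focal group.

```python
import numpy as np

def match_reference_sample(theta_ref, theta_focal, seed=None):
    """Pick reference examinees closest in ability to each focal examinee."""
    rng = np.random.default_rng(seed)
    available = list(range(len(theta_ref)))
    matched = []
    for t in rng.permutation(theta_focal):           # random focal order
        i = min(available, key=lambda j: abs(theta_ref[j] - t))
        matched.append(i)                            # 1:1, without replacement
        available.remove(i)
    return np.array(matched)                         # indices into theta_ref
```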
Svetina, Dubravka; Rutkowski, Leslie – Large-scale Assessments in Education, 2014
Background: When studying student performance across different countries or cultures, score comparability is an essential condition for comparison. In other words, it is imperative that the latent variable (i.e., the construct of interest) be understood and measured equivalently across all participating groups or countries if our inferences…
Descriptors: Test Items, Item Response Theory, Item Analysis, Regression (Statistics)
Diakow, Ronli Phyllis – ProQuest LLC, 2013
This dissertation comprises three papers that propose, discuss, and illustrate models to make improved inferences about research questions regarding student achievement in education. Addressing the types of questions common in educational research today requires three different "extensions" to traditional educational assessment: (1)…
Descriptors: Inferences, Educational Assessment, Academic Achievement, Educational Research
Vaughn, Brandon K.; Wang, Qiu – Educational and Psychological Measurement, 2010
A nonparametric tree classification procedure is used to detect differential item functioning for dichotomously scored items. Classification trees are shown to be an alternative to the traditional Mantel-Haenszel and logistic regression procedures for detecting differential item functioning. A nonparametric…
Descriptors: Test Bias, Classification, Nonparametric Statistics, Regression (Statistics)
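A rough analogue of the tree-based idea, using scikit-learn's CART implementation rather than the authors' procedure: fit a tree predicting item responses from the matching score plus group membership, and treat any split on group as a sign that group still matters after conditioning on score. The depth and leaf-size settings are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_dif_screen(item, total, group, max_depth=3):
    """Screen one item for DIF by checking whether a tree splits on group."""
    X = np.column_stack([total, group])   # column 0: score, column 1: group
    tree = DecisionTreeClassifier(max_depth=max_depth, min_samples_leaf=30)
    tree.fit(X, item)
    # Leaves are coded -2 in tree_.feature, so any node equal to 1 is a
    # genuine split on the group column.
    return bool(np.any(tree.tree_.feature == 1))
```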
van der Linden, Wim J. – Measurement: Interdisciplinary Research and Perspectives, 2010
The traditional way of equating the scores on a new test form X to those on an old form Y is equipercentile equating for a population of examinees. Because the population is likely to change between the two administrations, a popular approach is to equate for a "synthetic population." The authors of the articles in this issue of the…
Descriptors: Test Format, Equated Scores, Population Distribution, Population Trends
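As background for the discussion, single-population equipercentile equating maps each form-X score to the form-Y score with the same percentile rank. A minimal empirical sketch, with no presmoothing and no synthetic-population weighting (both of which the articles in the issue take up):

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x_points):
    """Map form-X scores to the form-Y scale by matching percentile ranks."""
    sx = np.sort(scores_x)
    # empirical percentile rank of each requested X score
    p = np.searchsorted(sx, x_points, side="right") / len(sx)
    # inverse empirical CDF of Y at those ranks (linear interpolation)
    return np.quantile(scores_y, np.clip(p, 0.0, 1.0))
```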
Culpepper, Steven Andrew – Multivariate Behavioral Research, 2009
This study linked nonlinear profile analysis (NPA) of dichotomous responses with an existing family of item response theory models and generalized latent variable models (GLVM). The NPA method offers several benefits over previous internal profile analysis methods: (a) NPA is estimated with maximum likelihood in a GLVM framework rather than…
Descriptors: Profiles, Item Response Theory, Models, Maximum Likelihood Statistics
Lu, Irene R. R.; Thomas, D. Roland – Structural Equation Modeling: A Multidisciplinary Journal, 2008
This article considers models involving a single structural equation with latent explanatory and/or latent dependent variables where discrete items are used to measure the latent variables. Our primary focus is the use of scores as proxies for the latent variables and carrying out ordinary least squares (OLS) regression on such scores to estimate…
Descriptors: Least Squares Statistics, Computation, Item Response Theory, Structural Equation Models
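The baseline practice the article examines can be sketched in a few lines: sum scores stand in for the latent variables, and OLS is run on the proxies. The point of the sketch is the problem itself, since measurement error in the proxies biases the slope toward zero, which is what motivates the corrections studied; names are illustrative.

```python
import numpy as np

def ols_on_proxy_scores(items_x, items_y):
    """Naive OLS of a latent outcome on a latent predictor via sum scores.

    items_x, items_y : (persons x items) 0/1 response matrices measuring
    the latent explanatory and latent dependent variables, respectively.
    """
    x = items_x.sum(axis=1).astype(float)   # proxy for latent predictor
    y = items_y.sum(axis=1).astype(float)   # proxy for latent outcome
    X = np.column_stack([np.ones_like(x), x])
    (b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
    return b0, b1                           # b1 is attenuated by error in x
```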
Reckase, Mark D. – Educational Measurement: Issues and Practice, 2006
Schulz (2006) provides a different perspective on standard setting from that provided in Reckase (2006). He also suggests a modification to the bookmark procedure and some models for errors in panelists' judgments that are alternatives to those provided by Reckase. This article provides a response to some of the points made by Schulz and reports some…
Descriptors: Evaluation Methods, Standard Setting, Reader Response, Regression (Statistics)
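For context, the bookmark procedure maps a panelist's placement in a difficulty-ordered item booklet to an ability cut score. Under a 1PL model with response-probability criterion rp, the cut for the bookmarked item is theta = b + ln(rp / (1 - rp)). A minimal sketch assuming 1PL and the RP67 convention, which is only one of the setups the exchange discusses:

```python
import math

def bookmark_cut_score(b_sorted, bookmark_index, rp=2/3):
    """Theta cut implied by a bookmark placement under a 1PL model.

    b_sorted : item difficulties in booklet (ascending) order
    rp       : mastery probability criterion (RP67 here, illustrative)
    """
    b = b_sorted[bookmark_index]             # difficulty of bookmarked item
    return b + math.log(rp / (1 - rp))       # solves P(theta) = rp
```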