Showing 1 to 15 of 18 results
Peer reviewed
Lions, Séverin; Dartnell, Pablo; Toledo, Gabriela; Godoy, María Inés; Córdova, Nora; Jiménez, Daniela; Lemarié, Julie – Educational and Psychological Measurement, 2023
Even though the impact of the position of response options on answers to multiple-choice items has been investigated for decades, it remains debated. Research on this topic is inconclusive, perhaps because too few studies have obtained experimental data from large samples in a real-world context and have manipulated the position of both…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Responses
Peer reviewed
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
Peer reviewed
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3 parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
Peer reviewed
PDF on ERIC
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) offers many advantages over its predecessor, Classical Test Theory (CTT), such as invariant item parameters and ability estimates that do not depend on the particular items administered. However, to obtain these advantages, several assumptions must be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
Peer reviewed
Dodge, Nadine; Chapman, Ralph – International Journal of Social Research Methodology, 2018
Electronically assisted survey techniques offer several advantages over traditional survey techniques. However, they can also potentially introduce biases, such as coverage biases and measurement error. The current study compares the relative merits of two survey distribution and completion modes: email recruitment with internet completion; and…
Descriptors: Online Surveys, Handheld Devices, Bias, Electronic Mail
Peer reviewed
PDF on ERIC
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
Peer reviewed
Paek, Insu; Cai, Li – Educational and Psychological Measurement, 2014
The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…
Descriptors: Item Response Theory, Comparative Analysis, Error of Measurement, Computation
Peer reviewed
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
Peer reviewed
Levy, Roy – Educational Psychologist, 2016
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Descriptors: Bayesian Statistics, Models, Educational Research, Innovation
Peer reviewed
He, Qingping; Anwyll, Steve; Glanville, Matthew; Opposs, Dennis – Research Papers in Education, 2014
Since 2010, the whole national cohort Key Stage 2 (KS2) National Curriculum test in science in England has been replaced with a sampling test taken by pupils at the age of 11 from a nationally representative sample of schools annually. The study reported in this paper compares the performance of different subgroups of the samples (classified by…
Descriptors: National Curriculum, Sampling, Foreign Countries, Factor Analysis
Peer reviewed
DeMars, Christine E. – Structural Equation Modeling: A Multidisciplinary Journal, 2012
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
Descriptors: Item Response Theory, Structural Equation Models, Computation, Computer Software
Peer reviewed
Papadopoulos, Timothy C.; Kendeou, Panayiota; Spanoudis, George – Journal of Educational Psychology, 2012
Theory-driven conceptualizations of phonological abilities in a sufficiently transparent language (Greek) were examined in children ages 5 years 8 months to 7 years 7 months, by comparing a set of a priori models. Specifically, the fit of 9 different models was evaluated, as defined by the Number of Factors (1 to 3; represented by rhymes,…
Descriptors: Evidence, Reading Fluency, Phonemes, Factor Structure
Peer reviewed
Stubbe, Tobias C. – Educational Research and Evaluation, 2011
The challenge inherent in cross-national research of providing instruments in different languages measuring the same construct is well known. But even instruments in a single language may be biased towards certain countries or regions due to local linguistic specificities. Consequently, it may be appropriate to use different versions of an…
Descriptors: Test Items, International Studies, Foreign Countries, German
Peer reviewed
Culpepper, Steven Andrew – Multivariate Behavioral Research, 2009
This study linked nonlinear profile analysis (NPA) of dichotomous responses with an existing family of item response theory models and generalized latent variable models (GLVM). The NPA method offers several benefits over previous internal profile analysis methods: (a) NPA is estimated with maximum likelihood in a GLVM framework rather than…
Descriptors: Profiles, Item Response Theory, Models, Maximum Likelihood Statistics
Roos, Linda L.; Wise, Steven L.; Finney, Sara J. – 1998
Previous studies have shown that, when administered a self-adapted test, a few examinees will choose item difficulty levels that are not well-matched to their proficiencies, resulting in high standard errors of proficiency estimation. This study investigated whether the previously observed effects of a self-adapted test--lower anxiety and higher…
Descriptors: Adaptive Testing, College Students, Comparative Analysis, Computer Assisted Testing