Showing 1 to 15 of 38 results
Peer reviewed
Direct link
Chen, Chia-Wen; Wang, Wen-Chung; Chiu, Ming Ming; Ro, Sage – Journal of Educational Measurement, 2020
The use of computerized adaptive testing algorithms for ranking items (e.g., college preferences, career choices) involves two major challenges: unacceptably high computation times (selecting from a large item pool with many dimensions) and biased results (enhanced preferences or intensified examinee responses because of repeated statements across…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
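For context on the selection step whose cost the authors target: the baseline rule in most CATs is the maximum-information criterion, which scores every unused item at the current ability estimate and administers the best one. A minimal unidimensional 2PL sketch (hypothetical pool parameters; not the authors' algorithm, which generalizes this to ranking items in many dimensions):

    import numpy as np

    def fisher_info(theta, a, b):
        # 2PL item information: a^2 * P * (1 - P), with P the response probability
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1 - p)

    rng = np.random.default_rng(0)
    a = rng.uniform(0.5, 2.0, 500)      # hypothetical discrimination parameters
    b = rng.normal(0.0, 1.0, 500)       # hypothetical difficulty parameters
    administered = {10, 42}             # items already given
    theta_hat = 0.3                     # current ability estimate

    info = fisher_info(theta_hat, a, b)
    info[list(administered)] = -np.inf  # never reselect an administered item
    print(int(np.argmax(info)))         # maximum-information choice

Scanning the whole pool at every step is exactly what becomes expensive in large, multidimensional banks.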
Peer reviewed
PDF on ERIC: Download full text
Keller, Bryan; Chen, Jianshen – Society for Research on Educational Effectiveness, 2016
Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…
Descriptors: Computation, Influences, Observation, Data
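A note on the truncated clause: the standard statement of (strong) ignorability, due to Rosenbaum and Rubin (1983) and not quoted from this paper, is that treatment assignment $T$ is ignorable given observed covariates $X$ when

    $(Y(1), Y(0)) \perp T \mid X$  and  $0 < \Pr(T = 1 \mid X) < 1$,

that is, $X$ contains all confounders of selection and the potential outcomes, and every unit has a chance of receiving either condition.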
Peer reviewed
PDF on ERIC: Download full text
Steiner, Peter M.; Kim, Jee-Seon – Society for Research on Educational Effectiveness, 2015
Despite the popularity of propensity score (PS) techniques, they are not yet well studied for matching multilevel data where selection into treatment takes place among level-one units within clusters. This paper suggests a PS matching strategy that tries to avoid the disadvantages of within- and across-cluster matching. The idea is to first…
Descriptors: Computation, Outcomes of Treatment, Multivariate Analysis, Probability
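A loose sketch of the within-cluster half of such a strategy (synthetic data; the truncated abstract only begins to state the full procedure, which also handles units left unmatched inside their clusters): estimate propensity scores once, then greedily match each treated unit to the nearest-score control in the same cluster.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 400
    cluster = rng.integers(0, 10, n)                      # level-two cluster ids
    x = rng.normal(size=(n, 2))                           # level-one covariates
    t = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1]))))

    ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

    pairs = []
    for c in np.unique(cluster):
        idx = np.where(cluster == c)[0]
        controls = list(idx[t[idx] == 0])
        for i in idx[t[idx] == 1]:                        # 1:1 match without replacement
            if not controls:
                break
            j = min(controls, key=lambda k: abs(ps[i] - ps[k]))
            controls.remove(j)
            pairs.append((int(i), int(j)))
    print(len(pairs), "within-cluster matched pairs")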
Peer reviewed
Direct link
Bar-Hillel, Maya; Peer, Eyal; Acquisti, Alessandro – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014
When asked to mentally simulate coin tosses, people generate sequences that differ systematically from those generated by fair coins. It has rarely been noted that this divergence is already apparent in the very first mental toss. Analysis of several existing data sets reveals that about 80% of respondents start their sequence with Heads. We…
Descriptors: Bias, Selection, Cognitive Processes, Simulation
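The 80% figure is easy to put next to the fair-coin benchmark, whose first toss is Heads half the time; a two-line check in Python:

    import numpy as np

    first = np.random.default_rng(42).integers(0, 2, 100_000)  # 1 = Heads, fair coin
    print(first.mean())  # ~0.50, versus the ~0.80 of human respondents reported here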
Peer reviewed
Direct link
Kopf, Julia; Zeileis, Achim; Strobl, Carolin – Educational and Psychological Measurement, 2015
Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…
Descriptors: Test Items, Equated Scores, Test Bias, Item Response Theory
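A sketch of the common-metric step for item-wise Rasch DIF testing, with hypothetical difficulty estimates and standard errors per group (the paper's own anchor-selection methods are not reproduced): shift the focal group's parameters so the anchor items have equal mean difficulty in both groups, then compute a Wald-type z statistic per item.

    import numpy as np

    b_ref = np.array([-1.2, -0.4, 0.0, 0.5, 1.1])   # reference-group difficulties
    b_foc = np.array([-0.9, -0.1, 0.6, 0.8, 1.4])   # focal-group difficulties
    se_ref = np.full(5, 0.08)
    se_foc = np.full(5, 0.09)

    anchor = [0, 1, 4]                               # assumed DIF-free anchor items
    shift = (b_ref[anchor] - b_foc[anchor]).mean()   # align focal metric to reference
    z = (b_foc + shift - b_ref) / np.sqrt(se_ref**2 + se_foc**2)
    for i, zi in enumerate(z):
        print(f"item {i}: z = {zi:+.2f}" + ("  <- DIF?" if abs(zi) > 1.96 else ""))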
Peer reviewed
Direct link
Steiner, Peter M.; Cook, Thomas D.; Li, Wei; Clark, M. H. – Journal of Research on Educational Effectiveness, 2015
In observational studies, selection bias will be completely removed only if the selection mechanism is ignorable, namely, all confounders of treatment selection and potential outcomes are reliably measured. Ideally, well-grounded substantive theories about the selection process and outcome-generating model are used to generate the sample of…
Descriptors: Quasiexperimental Design, Bias, Selection, Observation
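When ignorability is judged defensible, one textbook estimator (shown for orientation; not necessarily the design studied in this article) is inverse-propensity weighting:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 5000
    x = rng.normal(size=(n, 2))                           # measured confounders
    t = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1]))))
    y = 1.0 * t + x[:, 0] + rng.normal(size=n)            # true effect = 1.0

    ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    w = t / ps + (1 - t) / (1 - ps)                       # inverse-propensity weights
    ate = (np.average(y[t == 1], weights=w[t == 1])
           - np.average(y[t == 0], weights=w[t == 0]))
    print(round(ate, 2))                                  # ~1.0 after reweighting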
Peer reviewed
Direct link
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
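One standard fully multidimensional selection rule (offered as an illustration; the study's exact algorithm is not reproduced) is D-optimality: administer the item that maximizes the determinant of the accumulated Fisher information. Under a multidimensional 2PL, each item contributes $P(1-P)\,\mathbf{a}\mathbf{a}^\top$:

    import numpy as np

    def item_info(theta, a_vec, d):
        p = 1.0 / (1.0 + np.exp(-(a_vec @ theta + d)))   # M2PL response probability
        return p * (1 - p) * np.outer(a_vec, a_vec)

    rng = np.random.default_rng(3)
    K, pool = 3, 200
    A = rng.uniform(0.5, 1.5, (pool, K))   # hypothetical discrimination vectors
    d = rng.normal(size=pool)              # hypothetical intercepts
    theta_hat = np.zeros(K)                # current ability estimate
    acc = np.eye(K) * 1e-3                 # accumulated information (ridge keeps it invertible)

    given = set()
    for _ in range(10):                    # administer 10 items
        cand = [j for j in range(pool) if j not in given]
        j = max(cand, key=lambda j: np.linalg.det(acc + item_info(theta_hat, A[j], d[j])))
        given.add(j)
        acc += item_info(theta_hat, A[j], d[j])
    print(sorted(given))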
Peer reviewed
Direct link
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan – Educational and Psychological Measurement, 2012
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
Descriptors: Test Items, Selection, Test Construction, Item Response Theory
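A loose reading of the cell idea (the paper's actual cell definitions are not reproduced; bins here are hypothetical): classify bank items into cells on discrimination and difficulty, then draw one item per cell for each parallel form so the forms share a cell profile.

    import numpy as np

    rng = np.random.default_rng(5)
    a = rng.lognormal(0.0, 0.3, 540)       # hypothetical 540-item bank
    b = rng.normal(0.0, 1.0, 540)

    a_bin = np.digitize(a, np.quantile(a, [0.25, 0.5, 0.75]))
    b_bin = np.digitize(b, np.quantile(b, [0.2, 0.4, 0.6, 0.8]))
    cells = {}
    for i in range(540):
        cells.setdefault((a_bin[i], b_bin[i]), []).append(i)

    forms = [[], [], []]                   # three parallel forms
    for items in cells.values():
        picks = rng.choice(items, size=min(len(forms), len(items)), replace=False)
        for form, item in zip(forms, picks):
            form.append(int(item))
    print([len(f) for f in forms])         # near-identical cell profiles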
Peer reviewed
Direct link
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice poses a challenge to standard item response models, because more capable examinees may gain an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
Peer reviewed
Direct link
Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung – Applied Psychological Measurement, 2012
In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on these kinds of models were implemented, and their performance under a variety of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Simulation
Peer reviewed
Direct link
Kang, Taehoon; Cohen, Allan S.; Sung, Hyun-Jung – Applied Psychological Measurement, 2009
This study examines the utility of four indices for use in model selection with nested and nonnested polytomous item response theory (IRT) models: a cross-validation index and three information-based indices. Four commonly used polytomous IRT models are considered: the graded response model, the generalized partial credit model, the partial credit…
Descriptors: Item Response Theory, Models, Selection, Simulation
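The truncated abstract does not name the three information-based indices; the two most common in IRT model comparison, AIC and BIC, are computed directly from the maximized log-likelihood (values below are hypothetical):

    import numpy as np

    def aic(loglik, k):
        return -2 * loglik + 2 * k                  # k = number of free parameters

    def bic(loglik, k, n):
        return -2 * loglik + k * np.log(n)          # n = number of observations

    fits = {"partial credit": (-10450.3, 120),      # (log-likelihood, parameters)
            "generalized partial credit": (-10390.8, 150)}
    for name, (ll, k) in fits.items():
        print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, 2000):.1f}")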
Peer reviewed
Direct link
Cheng, Ying; Chang, Hua-Hua; Douglas, Jeffrey; Guo, Fanmin – Educational and Psychological Measurement, 2009
a-stratification is a method that utilizes items with small discrimination (a) parameters early in an exam and those with higher a values when more is learned about the ability parameter. It can achieve much better item usage than the maximum information criterion (MIC). To make a-stratification more practical and more widely applicable, a method…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
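In the a-stratified design this paper extends (Chang & Ying, 1999), the pool is partitioned into strata of ascending discrimination; early stages draw from low-a strata, matching item difficulty to the current ability estimate. A compact sketch with hypothetical parameters:

    import numpy as np

    rng = np.random.default_rng(11)
    a = rng.lognormal(0.0, 0.4, 300)           # discrimination
    b = rng.normal(0.0, 1.0, 300)              # difficulty

    strata = np.array_split(np.argsort(a), 4)  # 4 strata, ascending a
    theta_hat, used, test = 0.0, set(), []
    for stratum in strata:                     # early stages -> low-a strata
        for _ in range(5):                     # 5 items per stratum
            avail = [j for j in stratum if j not in used]
            j = min(avail, key=lambda j: abs(b[j] - theta_hat))  # b nearest theta
            used.add(j); test.append(int(j))
            # theta_hat would be re-estimated here after each observed response
    print(test)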
Peer reviewed
PDF on ERIC: Download full text
Moses, Tim; Holland, Paul – ETS Research Report Series, 2008
This study addressed two issues in the use of loglinear models for smoothing univariate test score distributions and for enhancing the stability of equipercentile equating functions. One issue was a comparative assessment of several statistical strategies that have been proposed for selecting one from several competing model parameterizations. Another…
Descriptors: Equated Scores, Selection, Models, Statistical Analysis
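Loglinear smoothing fits $\log m_j = \beta_0 + \sum_i \beta_i x_j^i$ to the score frequencies, so model selection amounts to choosing how many polynomial moments to keep. A sketch with a Poisson GLM, comparing degrees by AIC (one strategy of the kind the report evaluates; data are synthetic):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    scores = np.clip(rng.normal(25, 6, 1000).round(), 0, 40).astype(int)
    freqs = np.bincount(scores, minlength=41)       # raw-score frequencies, 0..40
    xs = np.arange(41) / 40.0                       # rescaled score points

    for degree in (2, 3, 4, 5):
        X = np.column_stack([xs**i for i in range(degree + 1)])
        fit = sm.GLM(freqs, X, family=sm.families.Poisson()).fit()
        print(degree, round(fit.aic, 1))            # smallest AIC wins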
Peer reviewed
Direct link
Chater, Nick; Vlaev, Ivo; Grinberg, Maurice – Journal of Experimental Psychology: General, 2008
Theories of choice in economics typically assume that interacting agents act individualistically and maximize their own utility. Specifically, game theory proposes that rational players should defect in one-shot prisoners' dilemmas (PD). Defection also appears to be the inevitable outcome for agents who learn by reinforcement of past choices,…
Descriptors: Game Theory, Cooperation, Selection, Reinforcement
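A minimal sketch of the reinforcement result the abstract alludes to (payoffs and learning rule are illustrative, not the paper's): two agents updating action values from their own payoffs in a repeated one-shot PD drift toward mutual defection.

    import numpy as np

    PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}  # 0 = cooperate, 1 = defect
    rng = np.random.default_rng(9)
    q = np.full((2, 2), 3.0)          # action values for the two agents
    alpha, eps = 0.1, 0.05            # learning rate, exploration rate

    for _ in range(5000):
        moves = [int(rng.integers(0, 2)) if rng.random() < eps else int(np.argmax(q[i]))
                 for i in range(2)]
        for i in range(2):
            r = PAYOFF[(moves[i], moves[1 - i])]
            q[i, moves[i]] += alpha * (r - q[i, moves[i]])

    print(q.round(2))                 # column 1 (defect) ends higher for both agents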
Peer reviewed
Direct link
Marewski, Julian N.; Schooler, Lael J. – Psychological Review, 2011
How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each…
Descriptors: Foreign Countries, Models, Familiarity, Holistic Approach
Previous Page | Next Page »
Pages: 1  |  2  |  3