Showing 1 to 15 of 31 results
Peer reviewed
Chen, Chia-Wen; Wang, Wen-Chung; Chiu, Ming Ming; Ro, Sage – Journal of Educational Measurement, 2020
The use of computerized adaptive testing algorithms for ranking items (e.g., college preferences, career choices) involves two major challenges: unacceptably high computation times (selecting from a large item pool with many dimensions) and biased results (enhanced preferences or intensified examinee responses because of repeated statements across…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Gómez Galindo, Alma Adrianna; González Galli, Leonardo; García Franco, Alejandra – Journal of Biological Education, 2021
In this paper, we present a simulation of artificial selection of maize that can be used as a bridging case for the subsequent introduction of natural selection in school. The proposed simulation takes up essential biological elements but also has a cultural meaning for the inhabitants of some regions of Latin America. After implementing a test of…
Descriptors: Science Instruction, Biology, Evolution, Simulation
Peer reviewed
PDF on ERIC
Keller, Bryan; Chen, Jianshen – Society for Research on Educational Effectiveness, 2016
Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…
Descriptors: Computation, Influences, Observation, Data
Carroll, Ian A. – ProQuest LLC, 2017
Item exposure control is, relative to adaptive testing, a nascent concept that has emerged only in the last two to three decades on an academic basis as a practical issue in high-stakes computerized adaptive tests. This study aims to implement a new strategy in item exposure control by incorporating the standard error of the ability estimate into…
Descriptors: Test Items, Computer Assisted Testing, Selection, Adaptive Testing
Peer reviewed
PDF on ERIC
Steiner, Peter M.; Kim, Jee-Seon – Society for Research on Educational Effectiveness, 2015
Despite the popularity of propensity score (PS) techniques, they are not yet well studied for matching multilevel data where selection into treatment takes place among level-one units within clusters. This paper suggests a PS matching strategy that tries to avoid the disadvantages of within- and across-cluster matching. The idea is to first…
Descriptors: Computation, Outcomes of Treatment, Multivariate Analysis, Probability
Aimee Ladreyt Badeaux – ProQuest LLC, 2015
The purpose of this science education study is to explore visual cognition and eye tracking during medication selection in the student nurse anesthetist (first year and second year students) and the expert nurse anesthetist. The first phase of this study consisted of the selection of a specific medication (target) from an array of medications via…
Descriptors: Science Education, Nursing Students, Anesthesiology, Drug Therapy
Peer reviewed
Bar-Hillel, Maya; Peer, Eyal; Acquisti, Alessandro – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014
When asked to mentally simulate coin tosses, people generate sequences that differ systematically from those generated by fair coins. It has been rarely noted that this divergence is apparent already in the very 1st mental toss. Analysis of several existing data sets reveals that about 80% of respondents start their sequence with Heads. We…
Descriptors: Bias, Selection, Cognitive Processes, Simulation
Peer reviewed
Kopf, Julia; Zeileis, Achim; Strobl, Carolin – Educational and Psychological Measurement, 2015
Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…
Descriptors: Test Items, Equated Scores, Test Bias, Item Response Theory
Peer reviewed
Steiner, Peter M.; Cook, Thomas D.; Li, Wei; Clark, M. H. – Journal of Research on Educational Effectiveness, 2015
In observational studies, selection bias will be completely removed only if the selection mechanism is ignorable, namely, all confounders of treatment selection and potential outcomes are reliably measured. Ideally, well-grounded substantive theories about the selection process and outcome-generating model are used to generate the sample of…
Descriptors: Quasiexperimental Design, Bias, Selection, Observation
Peer reviewed
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
Peer reviewed
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan – Educational and Psychological Measurement, 2012
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
Descriptors: Test Items, Selection, Test Construction, Item Response Theory
Peer reviewed
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
Peer reviewed
Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung – Applied Psychological Measurement, 2012
In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on these kinds of models were implemented, and their performance under a variety of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Simulation
Peer reviewed
Ghaffarzadegan, Navid; Stewart, Thomas R. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2011
Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…
Descriptors: Decision Making, Feedback (Response), Constructivism (Learning), Hypothesis Testing