Showing all 8 results
Peer reviewed
Moses, Tim; von Davier, Alina – Applied Psychological Measurement, 2011
Polynomial loglinear models for one-, two-, and higher-way contingency tables have important applications to measurement and assessment. They are essentially regarded as a smoothing technique, which is commonly referred to as loglinear smoothing. A SAS IML (SAS Institute, 2002a) macro was created to implement loglinear smoothing according to…
Descriptors: Statistical Analysis, Computer Software, Algebra, Mathematical Formulas
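Loglinear smoothing fits a low-degree polynomial to the log of a score-frequency distribution. As a rough illustration (not the SAS IML macro the article describes), here is a minimal Python sketch that fits the polynomial by least squares on log counts, an approximation to the Poisson maximum-likelihood fit used in the loglinear smoothing literature:

```python
import math

def smooth_loglinear(freqs, degree=2):
    """Simplified polynomial loglinear smoothing: least-squares fit of
    log(freq) on powers of the score index. freqs must be positive."""
    xs = list(range(len(freqs)))
    ys = [math.log(f) for f in freqs]
    k = degree + 1
    # Normal equations X'X b = X'y for the polynomial design matrix.
    xtx = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    xty = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            m = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= m * xtx[col][c]
            xty[r] -= m * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, k))) / xtx[r][r]
    # Smoothed frequencies: exponentiate the fitted polynomial.
    return [math.exp(sum(b * x ** i for i, b in enumerate(beta))) for x in xs]
```

When the observed distribution is exactly loglinear of the chosen degree, the fit reproduces it; otherwise it returns a smoothed version preserving the polynomial trend in the log frequencies.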
Peer reviewed
Huang, Tsai-Wei – Educational Technology & Society, 2012
The study compared the aberrance detection powers of the BW person-fit indices with other group-based indices (SCI, MCI, NCI, and Wc&Bs) and item response theory based (IRT-based) indices (OUTFITz, INFITz, ECI2z, ECI4z, and lz). Four kinds of comparative conditions, including content category (CC), types of aberrance (AT), severity of…
Descriptors: Item Response Theory, Comparative Analysis, Effect Size, Probability
Peer reviewed
Sinharay, Sandip; Holland, Paul W. – Journal of Educational Measurement, 2010
The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three equating methods that can be used with a NEAT design are the frequency estimation equipercentile equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. We suggest an…
Descriptors: Equated Scores, Item Response Theory, Comparative Analysis, Evaluation
Peer reviewed
Bai, Haiyan – International Journal of Research & Method in Education, 2011
Propensity score matching (PSM) has become a popular approach for research studies when randomization is infeasible. However, there are significant differences in the effectiveness of selection bias reduction among the existing PSM methods and, therefore, it is challenging for researchers to select an appropriate matching method. This current…
Descriptors: Research Methodology, Researchers, Comparative Analysis, Scores
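The core of most PSM methods is pairing each treated unit with the control unit whose estimated propensity score is closest. A minimal greedy nearest-neighbor sketch, using hypothetical data and not any specific matching method compared in the article:

```python
def nearest_neighbor_match(treated, controls):
    """Greedy 1:1 nearest-neighbor matching on propensity scores,
    without replacement. treated/controls: lists of (id, score)."""
    pool = dict(controls)          # remaining unmatched controls
    pairs = []
    for tid, score in treated:
        if not pool:
            break
        # Closest remaining control by absolute score distance.
        cid = min(pool, key=lambda c: abs(pool[c] - score))
        pairs.append((tid, cid))
        del pool[cid]              # matching without replacement
    return pairs

# Hypothetical units: (id, estimated propensity score)
pairs = nearest_neighbor_match([("t1", 0.6), ("t2", 0.3)],
                               [("c1", 0.58), ("c2", 0.32), ("c3", 0.9)])
```

Real PSM implementations add caliper constraints, replacement options, and balance diagnostics, which is where the methods the article compares differ.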
Peer reviewed
Hilbig, Benjamin E.; Erdfelder, Edgar; Pohl, Rüdiger F. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2011
A new process model of the interplay between memory and judgment processes was recently suggested, assuming that retrieval fluency--that is, the speed with which objects are recognized--will determine inferences concerning such objects in a single-cue fashion. This aspect of the fluency heuristic, an extension of the recognition heuristic, has…
Descriptors: Stimuli, Heuristics, Memory, Goodness of Fit
Peer reviewed
Dunst, Carl J.; Hamby, Deborah W. – Journal of Intellectual & Developmental Disability, 2012
This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…
Descriptors: Intervals, Developmental Disabilities, Statistical Significance, Effect Size
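For reference, the most common standardized effect size, Cohen's d, divides the mean difference by the pooled standard deviation. A minimal sketch with hypothetical group data, not drawn from the article's hypothetical studies:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variance
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical intervention and control scores
d = cohens_d([12, 14, 15, 13, 16], [10, 11, 12, 10, 12])
```

Unlike an NHST p value, d is unaffected by sample size, which is why the two can point in different directions, as the abstract notes.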
Peer reviewed
Thomason-Sassi, Jessica L.; Iwata, Brian A.; Neidert, Pamela L.; Roscoe, Eileen M. – Journal of Applied Behavior Analysis, 2011
Dependent variables in research on problem behavior typically are based on measures of response repetition, but these measures may be problematic when behavior poses high risk or when its occurrence terminates a session. We examined response latency as the index of behavior during assessment. In Experiment 1, we compared response rate and latency…
Descriptors: Behavior Problems, Reaction Time, Functional Behavioral Assessment, Experiments
Peer reviewed
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
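Classification decisions in CCT are often made with Wald's sequential probability ratio test (SPRT). A simplified sketch assuming identical items with fixed success probabilities under the fail and pass hypotheses (no IRT model, unlike operational CCT):

```python
import math

def sprt_classify(responses, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT on dichotomous item responses (1 = correct).
    p0/p1: success probability under the fail/pass hypothesis."""
    lower = math.log(beta / (1 - alpha))   # accept-fail boundary
    upper = math.log((1 - beta) / alpha)   # accept-pass boundary
    llr = 0.0                              # running log-likelihood ratio
    for i, u in enumerate(responses, 1):
        llr += math.log(p1 / p0) if u else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass", i               # classified after i items
        if llr <= lower:
            return "fail", i
    return "undecided", len(responses)
```

The test stops as soon as the accumulated evidence crosses either boundary, which is what lets CCT classify examinees with far fewer items than a fixed-length test.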