Showing 1 to 15 of 28 results
Peer reviewed
Julia-Kim Walther; Martin Hecht; Steffen Zitzmann – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Small sample sizes pose a severe threat to convergence and accuracy of between-group level parameter estimates in multilevel structural equation modeling (SEM). However, in certain situations, such as pilot studies or when populations are inherently small, increasing sample sizes is not feasible. As a remedy, we propose a two-stage regularized…
Descriptors: Sample Size, Hierarchical Linear Modeling, Structural Equation Models, Matrices
Peer reviewed
Sotoudeh, Ramina; DiMaggio, Paul – Sociological Methods & Research, 2023
Sociologists increasingly face choices among competing algorithms that represent reasonable approaches to the same task, with little guidance in choosing among them. We develop a strategy that uses simulated data to identify the conditions under which different methods perform well and applies what is learned from the simulations to predict which…
Descriptors: Algorithms, Simulation, Prediction, Correlation
Peer reviewed
Sijia Huang; Dubravka Svetina Valdivia – Educational and Psychological Measurement, 2024
Identifying items with differential item functioning (DIF) in an assessment is a crucial step for achieving equitable measurement. One critical issue that has not been fully addressed with existing studies is how DIF items can be detected when data are multilevel. In the present study, we introduced a Lord's Wald X² test-based…
Descriptors: Item Analysis, Item Response Theory, Algorithms, Accuracy
Peer reviewed
Goretzko, David – Educational and Psychological Measurement, 2022
Determining the number of factors in exploratory factor analysis is arguably the most crucial decision a researcher faces when conducting the analysis. While several simulation studies exist that compare various so-called factor retention criteria under different data conditions, little is known about the impact of missing data on this process.…
Descriptors: Factor Analysis, Research Problems, Data, Prediction
Peer reviewed
Kucina, Talira; Sauer, James D.; Holt, Glenys A.; Brewer, Neil; Palmer, Matthew A. – Applied Cognitive Psychology, 2020
Presenting a blank line-up--containing only fillers--to witnesses prior to showing a real line-up might be useful for screening out those who pick from the blank line-up as unreliable witnesses. We show that the effectiveness of this procedure varies depending on instructions given to witnesses. Participants (N = 462) viewed a simulated crime and…
Descriptors: Recognition (Psychology), Simulation, Crime, Identification
Peer reviewed
Kristin Porter; Luke Miratrix; Kristen Hunter – Society for Research on Educational Effectiveness, 2021
Background: Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs)…
Descriptors: Statistical Analysis, Hypothesis Testing, Computer Software, Randomized Controlled Trials
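As a hypothetical illustration of the kind of multiple testing procedure (MTP) this abstract discusses, the sketch below implements the Benjamini-Hochberg step-up procedure, one widely used MTP; the p-values are invented and the source does not specify which procedures the authors study.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean list: True where the corresponding
    hypothesis is rejected at false discovery rate alpha.
    """
    m = len(p_values)
    # Sort p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= k/m * alpha.
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            max_k = rank
    # Reject every hypothesis whose p-value rank is <= max_k.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            rejected[idx] = True
    return rejected

# Four made-up p-values, e.g. one intervention tested on four outcomes.
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30]))
```

With these inputs, only the two smallest p-values fall under their step-up thresholds, so only those two hypotheses are rejected.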
Liceralde, Van Rynald T. – ProQuest LLC, 2021
When we read, errors in oculomotor programming can cause the eyes to land and fixate on different words from what the mind intended. Previous work suggests that these "mislocated fixations" form 10-30% of first-pass fixations in reading eye movement data, which presents theoretical and analytic issues for eyetracking-while-reading…
Descriptors: Eye Movements, Reading Processes, Error Patterns, Psychomotor Skills
Peer reviewed
PDF on ERIC
Saatcioglu, Fatima Munevver; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2022
This study aims to examine the effects of mixture item response theory (IRT) models on item parameter estimation and classification accuracy under different conditions. The manipulated variables of the simulation study are set as mixture IRT models (Rasch, 2PL, 3PL); sample size (600, 1000); the number of items (10, 30); the number of latent…
Descriptors: Accuracy, Classification, Item Response Theory, Programming Languages
Peer reviewed
Mundorf, Abigail M. D.; Lazarus, Linh T. T.; Uitvlugt, Mitchell G.; Healey, M. Karl – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2021
The temporal contiguity effect (TCE) is the tendency for the recall of one event to cue recall of other events originally experienced nearby in time. Retrieved context theory proposes that the TCE results from fundamental properties of episodic memory: binding of events to a drifting context representation during encoding and the reinstatement of…
Descriptors: Incidental Learning, Correlation, Recall (Psychology), Cues
Bramley, Tom – Research Matters, 2020
The aim of this study was to compare, by simulation, the accuracy of mapping a cut-score from one test to another by expert judgement (using the Angoff method) versus the accuracy with a small-sample equating method (chained linear equating). As expected, the standard-setting method resulted in more accurate equating when we assumed a higher level…
Descriptors: Cutting Scores, Standard Setting (Scoring), Equated Scores, Accuracy
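A minimal sketch of the small-sample equating method named in this abstract, chained linear equating, assuming a common anchor test A linking the two test forms; the score data and group design below are invented for illustration only.

```python
import statistics

def linear_map(from_scores, to_scores):
    """Linear equating function: match mean and SD of the two scales."""
    m_f, s_f = statistics.mean(from_scores), statistics.pstdev(from_scores)
    m_t, s_t = statistics.mean(to_scores), statistics.pstdev(to_scores)
    return lambda x: m_t + (s_t / s_f) * (x - m_f)

# Invented data: group 1 took test X and anchor A;
# group 2 took anchor A and test Y.
x1 = [20, 25, 30, 35, 40]; a1 = [10, 12, 14, 16, 18]
a2 = [11, 13, 15, 17, 19]; y2 = [22, 28, 34, 40, 46]

x_to_a = linear_map(x1, a1)   # X scale -> anchor scale (group 1)
a_to_y = linear_map(a2, y2)   # anchor scale -> Y scale (group 2)

# Chain the two linear functions to map a cut-score on X onto Y.
cut_on_x = 30
cut_on_y = a_to_y(x_to_a(cut_on_x))
print(round(cut_on_y, 2))
```

The chained function composes two within-group linear equatings, which is what makes the method usable with the small samples the study considers.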
Peer reviewed
Hedge, Craig; Powell, Georgina; Bompas, Aline; Sumner, Petroc – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
Response control or inhibition is one of the cornerstones of modern cognitive psychology, featuring prominently in theories of executive functioning and impulsive behavior. However, repeated failures to observe correlations between commonly applied tasks have led some theorists to question whether common response conflict processes even exist. A…
Descriptors: Individual Differences, Cognitive Ability, Cognitive Processes, Meta Analysis
Peer reviewed
de Zubicaray, Greig I.; Arciuli, Joanne; Kearney, Elaine; Guenther, Frank; McMahon, Katie L. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
Grounded or embodied cognition research has employed body-object interaction (BOI; e.g., Pexman et al., 2019) ratings to investigate sensorimotor effects during language processing. We investigated relationships between BOI ratings and nonarbitrary statistical mappings between words' phonological forms and their syntactic category in English;…
Descriptors: Language Processing, Psychomotor Skills, English, Predictor Variables
Peer reviewed
Aksu Dunya, Beyza – International Journal of Testing, 2018
This study was conducted to analyze potential item parameter drift (IPD) impact on person ability estimates and classification accuracy when drift affects an examinee subgroup. Using a series of simulations, three factors were manipulated: (a) percentage of IPD items in the CAT exam, (b) percentage of examinees affected by IPD, and (c) item pool…
Descriptors: Adaptive Testing, Classification, Accuracy, Computer Assisted Testing
Peer reviewed
Lee, Wooyeol; Cho, Sun-Joo – Applied Measurement in Education, 2017
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
Descriptors: Item Response Theory, Test Items, Bias, Computation
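The two parameter-recovery criteria this abstract names, bias and root mean-square error (RMSE), can be sketched as below; the true parameter value and the estimates from the Monte Carlo replications are invented for illustration.

```python
import math

def bias(estimates, true_value):
    """Average signed error of the estimates across replications."""
    return sum(e - true_value for e in estimates) / len(estimates)

def rmse(estimates, true_value):
    """Root mean-square error: typical magnitude of estimation error."""
    return math.sqrt(sum((e - true_value) ** 2 for e in estimates)
                     / len(estimates))

# Invented example: true item difficulty b = 0.5,
# estimates from four Monte Carlo replications.
est = [0.45, 0.55, 0.60, 0.40]
print(round(bias(est, 0.5), 4))
print(round(rmse(est, 0.5), 4))
```

Bias can be near zero even when RMSE is not, since positive and negative errors cancel in the mean but not in the squared term; reporting both, as the study does, separates systematic drift from overall estimation noise.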
Peer reviewed
Lee, Woo-yeol; Cho, Sun-Joo – Journal of Educational Measurement, 2017
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Descriptors: Test Items, Item Response Theory, Item Analysis, Simulation