Showing 1 to 15 of 109 results
Peer reviewed
Diaz, Emily; Brooks, Gordon; Johanson, George – International Journal of Assessment Tools in Education, 2021
This Monte Carlo study assessed Type I error in differential item functioning analyses using Lord's chi-square (LC), the likelihood ratio test (LRT), and the Mantel-Haenszel (MH) procedure. Two research interests were investigated: item response theory (IRT) model specification in LC and the LRT, and the continuity correction in the MH procedure. This study…
Descriptors: Test Bias, Item Response Theory, Statistical Analysis, Comparative Analysis
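The MH procedure and the continuity correction under study can be sketched directly; a minimal Python illustration, assuming responses have already been stratified into K score groups as 2x2 tables (all data hypothetical):

    import numpy as np

    def mantel_haenszel_chi2(tables, continuity_correction=True):
        """MH chi-square over K strata of 2x2 tables.

        tables: shape (K, 2, 2); rows = reference/focal group,
        columns = correct/incorrect within one total-score stratum.
        """
        t = np.asarray(tables, dtype=float)
        A = t[:, 0, 0]                                  # reference group, correct
        n = t.sum(axis=(1, 2))                          # stratum sizes
        ref, foc = t[:, 0, :].sum(axis=1), t[:, 1, :].sum(axis=1)
        right, wrong = t[:, :, 0].sum(axis=1), t[:, :, 1].sum(axis=1)
        expected = ref * right / n
        variance = ref * foc * right * wrong / (n ** 2 * (n - 1))
        deviation = abs(A.sum() - expected.sum())
        if continuity_correction:                       # the 0.5 correction under study
            deviation = max(deviation - 0.5, 0.0)
        return deviation ** 2 / variance.sum()          # refer to chi-square(1)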
Peer reviewed
Sirganci, Gozde; Uyumaz, Gizem; Yandi, Alperen – International Journal of Assessment Tools in Education, 2020
It is necessary to examine measurement invariance (MI) among groups in studies where different groups are compared using a measurement instrument. In most studies, measurement invariance is tested with multiple-group confirmatory factor analysis. This model applies many model adjustments based on the modification indexes. Therefore, it…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
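Since the abstract's point about model comparison is cut off, here is a generic sketch of the chi-square difference test used to compare nested invariance models (the fit values below are made up for illustration):

    from scipy.stats import chi2

    def chi2_difference_test(chi2_constrained, df_constrained, chi2_free, df_free):
        """Likelihood-ratio test of a constrained model (e.g., metric invariance)
        against the freer configural model; a small p value rejects invariance."""
        delta, ddf = chi2_constrained - chi2_free, df_constrained - df_free
        return delta, ddf, chi2.sf(delta, ddf)

    # Hypothetical fits: configural vs. loadings-constrained (metric) model
    print(chi2_difference_test(312.4, 168, 289.1, 160))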
Peer reviewed
Lenhard, Wolfgang; Lenhard, Alexandra – Educational and Psychological Measurement, 2021
The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with different item numbers and difficulties via an item response theory approach. Subsequently, we modeled the norm scores based on random…
Descriptors: Test Norms, Scores, Regression (Statistics), Test Items
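For contrast with SPCN, the conventional approach can be sketched as a within-group percentile rank mapped to a normal score; a minimal example with simulated raw scores (T = 50 + 10z is one common norm scale):

    import numpy as np
    from scipy.stats import norm

    def t_score(group_raw_scores, new_score):
        """Conventional per-age-group norming: percentile rank -> T score."""
        scores = np.sort(np.asarray(group_raw_scores))
        pr = (np.searchsorted(scores, new_score, side="right") - 0.5) / len(scores)
        pr = np.clip(pr, 0.005, 0.995)              # keep the z-score finite
        return 50 + 10 * norm.ppf(pr)

    group = np.random.default_rng(1).binomial(40, 0.55, size=300)  # one age group
    print(round(t_score(group, 28), 1))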
Peer reviewed
Önen, Emine – Universal Journal of Educational Research, 2019
This simulation study was conducted to compare the performance of frequentist and Bayesian approaches in terms of power to detect model misspecification in the form of omitted cross-loadings in CFA models, with respect to several variables (number of omitted cross-loadings, magnitude of main loadings, number of factors, number of indicators…
Descriptors: Factor Analysis, Bayesian Statistics, Comparative Analysis, Statistical Analysis
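In either framework, simulated power reduces to a rejection rate across replications; a one-function sketch (the decision rule itself is the placeholder, whether frequentist p values or Bayesian posterior checks):

    import numpy as np

    def empirical_power(rejections):
        """Share of replications that flag the misspecification; under a correctly
        specified generating model the same rate estimates the Type I error."""
        return float(np.mean(np.asarray(rejections, dtype=bool)))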
Peer reviewed
Desjardins, Christopher David – Journal of Experimental Education, 2016
The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…
Descriptors: Suspension, Statistical Analysis, Models, Data
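A sketch of the first two candidate models in statsmodels, with simulated overdispersed counts standing in for days suspended (the hurdle variants are left out):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=500))
    mu = np.exp(0.5 + 0.4 * X[:, 1])
    y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts, mean mu

    poisson_fit = sm.Poisson(y, X).fit(disp=False)
    negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)
    # Overdispersion shows up as a clearly better AIC for the negative binomial
    print(poisson_fit.aic, negbin_fit.aic)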
Peer reviewed
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl – Educational and Psychological Measurement, 2017
The item-position effect describes how an item's position within a test, that is, the number of previously completed items, affects the response to that item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Due to the inflexibility of these representations, our aim was to examine…
Descriptors: Goodness of Fit, Simulation, Factor Analysis, Intelligence Tests
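One common way to write such fixed courses (a sketch of the general idea, not the authors' exact specification) is an extra position factor \pi whose loadings \nu_i are fixed functions of item position i among n items:

    x_i = \lambda_i \xi + \nu_i \pi + \delta_i,
    \qquad \nu_i = \frac{i-1}{n-1} \;\text{(linear course)}
    \quad \text{or} \quad \nu_i = \left( \frac{i-1}{n-1} \right)^{2} \;\text{(quadratic course)},

with more flexible representations leaving some of the \nu_i freely estimated.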
Peer reviewed
Mawdsley, David; Higgins, Julian P. T.; Sutton, Alex J.; Abrams, Keith R. – Research Synthesis Methods, 2017
In meta-analysis, the random-effects model is often used to account for heterogeneity. The model assumes that heterogeneity has an additive effect on the variance of effect sizes. An alternative model, which assumes multiplicative heterogeneity, has been little used in the medical statistics community, but is widely used by particle physicists. In…
Descriptors: Databases, Meta Analysis, Goodness of Fit, Effect Size
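The two heterogeneity models differ only in how the within-study variances v_i enter the weights: additively (v_i + tau^2) or by a scale factor (phi * v_i). A sketch on hypothetical effect sizes:

    import numpy as np

    def pooled(y, v):
        """Additive (DerSimonian-Laird) vs. multiplicative heterogeneity."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1 / v
        mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect estimate
        Q, k = np.sum(w * (y - mu_fe) ** 2), len(y)
        # Additive model: inflate each variance by tau^2
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_a = 1 / (v + tau2)
        mu_a, se_a = np.sum(w_a * y) / np.sum(w_a), np.sqrt(1 / np.sum(w_a))
        # Multiplicative model: same point estimate, variances scaled by phi
        phi = Q / (k - 1)
        se_m = np.sqrt(phi / np.sum(w))
        return (mu_a, se_a), (mu_fe, se_m)

    print(pooled(y=[0.30, 0.12, 0.45, 0.26], v=[0.04, 0.02, 0.09, 0.03]))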
Peer reviewed
Boedeker, Peter – Practical Assessment, Research & Evaluation, 2017
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM, including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Descriptors: Hierarchical Linear Modeling, Maximum Likelihood Statistics, Bayesian Statistics, Computation
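The ML/REML choice can be toggled directly in statsmodels' MixedLM; a sketch with simulated two-level data (30 groups of 10 observations):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    g = np.repeat(np.arange(30), 10)                     # group labels
    x = rng.normal(size=300)
    y = 1.0 + 0.5 * x + rng.normal(size=30)[g] + rng.normal(size=300)
    data = pd.DataFrame({"y": y, "x": x, "g": g})

    model = sm.MixedLM.from_formula("y ~ x", data, groups="g")
    # REML corrects the variance components for fixed-effect estimation;
    # ML is needed for likelihood-ratio tests involving fixed effects.
    print(model.fit(reml=True).cov_re, model.fit(reml=False).cov_re, sep="\n")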
Peer reviewed
Sapmaz, Fatma; Totan, Tarik – Malaysian Online Journal of Educational Technology, 2018
The aim of this study is to model the happiness classification of university students--grouped by internet usage as addicted, at risk of addiction, threshold, and non-addicted--with compatibility analysis, mapping them as happy, average, and unhappy. The participants in this study were 400 university students from Turkey. According to the results of…
Descriptors: Foreign Countries, College Students, Addictive Behavior, At Risk Persons
Peer reviewed
Lamprianou, Iasonas – Educational and Psychological Measurement, 2018
It is common practice for assessment programs to organize qualifying sessions during which the raters (often known as "markers" or "judges") demonstrate their consistency before operational rating commences. Because of the high-stakes nature of many rating activities, the research community tends to continuously explore new…
Descriptors: Social Networks, Network Analysis, Comparative Analysis, Innovation
Koziol, Natalie A.; Bovaird, James A. – Educational and Psychological Measurement, 2018
Evaluations of measurement invariance provide essential construct validity evidence--a prerequisite for seeking meaning in psychological and educational research and ensuring fair testing procedures in high-stakes settings. However, the quality of such evidence is partly dependent on the validity of the resulting statistical conclusions. Type I or…
Descriptors: Computation, Tests, Error of Measurement, Comparative Analysis
Franklin, Josette R. – ProQuest LLC, 2017
This quantitative research study analyzed archival data to determine if there was a significant difference in promotion rates from third to fourth grade between students in foster care who received one-to-one tutoring and those students in foster care who did not receive one-to-one tutoring over two school years. This study also analyzed student…
Descriptors: Grade 3, Foster Care, Tutoring, Statistical Analysis
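Group-by-outcome comparisons like this typically come down to a test on a 2x2 table; a minimal sketch with invented counts (not the study's data):

    from scipy.stats import chi2_contingency

    # rows: tutored / not tutored; columns: promoted / retained (hypothetical)
    table = [[88, 12],
             [74, 26]]
    chi2_stat, p_value, dof, expected = chi2_contingency(table)
    print(chi2_stat, p_value)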
Peer reviewed
Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan Luis – Educational Studies, 2017
The present study, based on the construct comparability approach, performs a comparative analysis of general points averages for seven courses, using exploratory factor analysis (EFA) and the Partial Credit model (PCM), with a sample of 1,398 students (M = 12.5, SD = 0.67) from 8 schools in the province of Alicante (Spain). EFA confirmed a…
Descriptors: Comparative Analysis, Grades (Scholastic), Compulsory Education, Secondary Education
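For reference, the PCM gives the probability that a student at ability \theta scores x on item i with step difficulties \delta_{i1}, \dots, \delta_{i m_i}:

    P(X_i = x \mid \theta) \;=\;
    \frac{\exp \sum_{j=0}^{x} (\theta - \delta_{ij})}
         {\sum_{h=0}^{m_i} \exp \sum_{j=0}^{h} (\theta - \delta_{ij})},
    \qquad \sum_{j=0}^{0} (\theta - \delta_{ij}) \equiv 0.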
Peer reviewed
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2016
Meijer and van Krimpen-Stoop noted that the number of person-fit statistics (PFSs) that have been designed for computerized adaptive tests (CATs) is relatively modest. This article partially addresses that concern by suggesting three new PFSs for CATs. The statistics are based on tests for a change point and can be used to detect an abrupt change…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Goodness of Fit
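A minimal sketch of the change-point idea (not the authors' exact statistics): scan every split of the residual sequence and keep the largest standardized before/after difference:

    import numpy as np

    def max_change_point_stat(residuals):
        """residuals: observed-minus-expected item scores in administration
        order; a large maximum suggests an abrupt ability change mid-test."""
        r = np.asarray(residuals, float)
        n, best = len(r), 0.0
        for k in range(2, n - 1):                        # candidate change points
            diff = r[:k].mean() - r[k:].mean()
            se = np.sqrt(r.var(ddof=1) * (1 / k + 1 / (n - k)))
            best = max(best, abs(diff) / se)
        return best                                      # calibrate against a simulated null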
Peer reviewed
Steedle, Jeffrey T.; Ferrara, Steve – Applied Measurement in Education, 2016
As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…
Descriptors: Essays, Scoring, Comparative Analysis, Evaluators
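The usual aggregation device in comparative judgment is a Bradley-Terry model (the abstract does not name one, so treat this as a representative choice); a sketch using the classic Zermelo fixed-point updates:

    import numpy as np

    def bradley_terry(wins, iterations=200):
        """wins[i, j] = times essay i was judged better than essay j."""
        wins = np.asarray(wins, float)
        strength = np.ones(wins.shape[0])
        pairs = wins + wins.T                            # comparisons per pair
        for _ in range(iterations):
            for i in range(len(strength)):
                denom = np.sum(pairs[i] / (strength[i] + strength))
                strength[i] = wins[i].sum() / denom
            strength /= strength.sum()                   # fix the scale
        return np.log(strength)                          # logit-scale essay scores

    wins = np.array([[0, 7, 9], [3, 0, 6], [1, 4, 0]])   # hypothetical judgments
    print(bradley_terry(wins).round(2))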