Showing 586 to 600 of 3,711 results
Peer reviewed
Jacobs, Katrina Bartow – Teachers College Record, 2019
Background/Context: Issues of policy, practice, and assessment and the relationships between them have been a persistent focus in the practice and research of teacher preparation. However, the field has also long appreciated the tensions that persist between assessment approaches espoused in most teacher education programs and the realities of…
Descriptors: Elementary Secondary Education, Teacher Education Programs, Teacher Competencies, Preservice Teacher Education
Peer reviewed
Baghaei, Purya; Ravand, Hamdollah – SAGE Open, 2019
In many reading comprehension tests, different test formats are employed. Two commonly used test formats to measure reading comprehension are sustained passages followed by some questions and cloze items. Individual differences in handling test format peculiarities could constitute a source of score variance. In this study, a bifactor Rasch model…
Descriptors: Cloze Procedure, Test Bias, Individual Differences, Difficulty Level
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
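The item exposure rates examined in this abstract have a simple operational definition: the proportion of examinees who are administered a given item during the simulation. A minimal Python sketch of that calculation (not NWEA's code; the administration-log format is an assumption):

from collections import Counter

def exposure_rates(administered_items, n_examinees):
    """Return {item_id: proportion of examinees who saw the item}."""
    counts = Counter()
    for items_seen in administered_items:       # one entry per simulated examinee
        counts.update(set(items_seen))          # count each item at most once per person
    return {item: n / n_examinees for item, n in counts.items()}

# Example: 3 examinees with overlapping item selections
log = [{"m101", "m205"}, {"m101", "m310"}, {"m205", "m310"}]
print(exposure_rates(log, n_examinees=3))       # each item exposed to 2/3 of examinees

Under tighter grade-level restrictions, the same calculation would show exposure concentrating on the smaller pool of eligible items.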
Peer reviewed
Ardic, Elif Ozlem; Gelbal, Selahattin – Eurasian Journal of Educational Research, 2017
Purpose: The aim of this study was to examine measurement invariance of the interest and motivation related items contained in the PISA 2012 student survey with regard to gender, school type, and statistical regions, and to identify the items that show differential item functioning (DIF) across groups. Research Methods: Multiple-group confirmatory…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
Marbach, Joshua – Journal of Psychoeducational Assessment, 2017
The Mathematics Fluency and Calculation Tests (MFaCTs) are a series of measures designed to assess arithmetic calculation skills and calculation fluency in children ages 6 through 18. There are five main purposes of the MFaCTs: (1) identifying students who are behind in basic math fact automaticity; (2) evaluating possible delays in arithmetic…
Descriptors: Mathematics Tests, Computation, Mathematics Skills, Arithmetic
Peer reviewed
Castellano, Katherine E.; McCaffrey, Daniel F. – Educational Measurement: Issues and Practice, 2017
Mean or median student growth percentiles (MGPs) are a popular measure of educator performance, but they lack rigorous evaluation. This study investigates the error in MGP due to test score measurement error (ME). Using analytic derivations, we find that errors in the commonly used MGP are correlated with average prior latent achievement: Teachers…
Descriptors: Teacher Evaluation, Teacher Effectiveness, Value Added Models, Achievement Gains
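The student growth percentiles underlying the MGP measure are typically estimated by conditional quantile regression of current scores on prior scores. The sketch below illustrates the idea with a plain linear quantile regression in Python (statsmodels); the operational SGP methodology uses spline bases and often multiple prior years, and all scores here are simulated:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
prior = rng.normal(200, 15, size=2000)
current = 0.8 * prior + rng.normal(40, 10, size=2000)    # synthetic test scores
X = sm.add_constant(prior)

quantiles = np.arange(0.01, 1.00, 0.01)                  # percentiles 1-99
fits = [sm.QuantReg(current, X).fit(q=q) for q in quantiles]

def sgp(prior_score, current_score):
    """Highest conditional percentile of current score the student reached."""
    x = np.array([[1.0, prior_score]])
    preds = np.array([f.predict(x)[0] for f in fits])
    met = np.where(current_score >= preds)[0]
    return int(round(quantiles[met[-1]] * 100)) if met.size else 1

# A teacher's MGP: median SGP over that teacher's (prior, current) score pairs
students = [(195.0, 198.0), (210.0, 205.0), (188.0, 200.0)]
print(np.median([sgp(p, c) for p, c in students]))

Because each student's prior score is itself measured with error, the conditioning variable in this regression is noisy, which is the source of the correlated errors the study analyzes.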
Peer reviewed
Kabasakal, Kübra Atalay; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2015
This study examines the effect of differential item functioning (DIF) items on test equating through multilevel item response models (MIRMs) and traditional IRMs. The performances of three different equating models were investigated under 24 different simulation conditions, and the variables whose effects were examined included sample size, test…
Descriptors: Test Bias, Equated Scores, Item Response Theory, Simulation
Peer reviewed
Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N. – Educational and Psychological Measurement, 2015
A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often reported only in terms of statistical significance, and researchers have proposed different methods to empirically select anchor items. It is unclear, however, how many…
Descriptors: Personality Measures, Computer Assisted Testing, Measurement, Test Items
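The likelihood ratio test at the heart of this approach compares a model in which the studied item's parameters are constrained equal across groups (all other items serving as anchors) against a model in which they are freed. A generic sketch of the comparison step, with hypothetical log-likelihood values standing in for the output of an IRT estimation routine:

from scipy.stats import chi2

def lr_test(loglik_constrained, loglik_free, df):
    """2*(LL_free - LL_constrained) compared to a chi-square with df equal to
    the number of item parameters freed across groups."""
    g2 = 2.0 * (loglik_free - loglik_constrained)
    return g2, chi2.sf(g2, df)

# e.g. a 2PL item freed in one focal group: 2 extra parameters (a, b)
g2, p = lr_test(loglik_constrained=-10452.7, loglik_free=-10447.9, df=2)
print(f"G^2 = {g2:.2f}, p = {p:.4f}")    # flag the item as non-invariant if p < alpha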
Peer reviewed
Suh, Youngsuk; Talley, Anna E. – Applied Measurement in Education, 2015
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Descriptors: Test Bias, Multiple Choice Tests, Test Items, Methods
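Of the four methods compared, the odds ratio approach is the simplest to illustrate: among examinees who answered the item incorrectly, it contrasts the odds of selecting a particular distractor across the reference and focal groups. A minimal sketch with invented counts:

import math

def distractor_log_odds_ratio(ref_chose, ref_other, foc_chose, foc_other):
    """Log odds ratio and its standard error from a 2x2 table of
    (chose this distractor vs. chose another wrong option) by group."""
    lor = math.log((ref_chose * foc_other) / (ref_other * foc_chose))
    se = math.sqrt(1/ref_chose + 1/ref_other + 1/foc_chose + 1/foc_other)
    return lor, se

lor, se = distractor_log_odds_ratio(ref_chose=80, ref_other=120,
                                    foc_chose=55, foc_other=45)
z = lor / se
print(f"log OR = {lor:.2f}, z = {z:.2f}")    # |z| > 1.96 suggests differential distractor functioning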
Peer reviewed
Baghaei, Purya; Kubinger, Klaus D. – Practical Assessment, Research & Evaluation, 2015
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Maier, 2014) functions to estimate the model and interpret its parameters. The…
Descriptors: Item Response Theory, Models, Test Validity, Hypothesis Testing
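The linear constraint that distinguishes the LLTM from the ordinary Rasch model is that each item difficulty is a weighted sum of basic (operation) parameters, beta_i = sum_j q_ij * eta_j, with the weights collected in a design (Q) matrix. The sketch below recovers illustrative basic parameters by least squares from invented difficulties, purely to show the linear structure; eRm itself estimates them by conditional maximum likelihood:

import numpy as np

Q = np.array([[1, 0, 0],     # item 1 requires operation A only
              [1, 1, 0],     # item 2 requires A and B
              [0, 1, 1],     # item 3 requires B and C
              [1, 1, 1]])    # item 4 requires all three operations
beta = np.array([-0.5, 0.3, 1.1, 1.0])            # invented Rasch item difficulties

eta, *_ = np.linalg.lstsq(Q, beta, rcond=None)    # basic parameter estimates
print("operation difficulties:", np.round(eta, 2))
print("reconstructed item difficulties:", np.round(Q @ eta, 2))

Comparing the reconstructed difficulties with the freely estimated Rasch difficulties is the usual check of whether the linear constraint is tenable.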
Peer reviewed
Orr, Margaret Terry; Pecheone, Ray; Snyder, Jon D.; Murphy, Joseph; Palanki, Ameetha; Beaudin, Barbara; Hollingworth, Liz; Buttram, Joan L. – Journal of Research on Leadership Education, 2018
This article presents the validity and bias review feedback and outcomes of new performance-based assessments used to evaluate candidates seeking principal licensure. Until now, there has been little empirical work on performance assessment for principal licensure. One state recently developed a multi-task performance assessment for leaders and has…
Descriptors: Performance Based Assessment, Licensing Examinations (Professions), Principals, Evidence
Peer reviewed
Oliveri, María Elena; Ercikan, Kadriye; Zumbo, Bruno D. – Applied Measurement in Education, 2014
Heterogeneity within English language learner (ELL) groups has been documented. Previous research on differential item functioning (DIF) analyses suggests that accurate DIF detection rates are greatly reduced when groups are heterogeneous. In this simulation study, we investigated the effects of heterogeneity within linguistic (ELL) groups on…
Descriptors: Test Bias, Accuracy, English Language Learners, Simulation
Peer reviewed
Hidalgo, Mª Dolores; Gómez-Benito, Juana; Zumbo, Bruno D. – Educational and Psychological Measurement, 2014
The authors analyze the effectiveness of the R² and delta log odds ratio effect size measures when using logistic regression analysis to detect differential item functioning (DIF) in dichotomous items. A simulation study was carried out, and the Type I error rate and power estimates under conditions in which only statistical testing…
Descriptors: Regression (Statistics), Test Bias, Effect Size, Test Items
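In the logistic regression DIF framework this abstract refers to, the matching variable enters the model first and the group term is added to test uniform DIF; the change in R² and the group coefficient (a log odds ratio) serve as effect sizes. A minimal sketch on simulated data, using McFadden's pseudo-R² in place of the specific R² measure examined in the article:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)                    # 0 = reference, 1 = focal
theta = rng.normal(0, 1, n)
total = theta + rng.normal(0, 0.5, n)            # matching variable (e.g., rest score)
logit = 1.2 * theta - 0.4 * group                # item response with uniform DIF
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X1 = sm.add_constant(np.column_stack([total]))
X2 = sm.add_constant(np.column_stack([total, group]))
m1 = sm.Logit(y, X1).fit(disp=0)                 # matching variable only
m2 = sm.Logit(y, X2).fit(disp=0)                 # adds group term (uniform DIF)

delta_r2 = m2.prsquared - m1.prsquared           # pseudo-R² change as effect size
log_or = m2.params[2]                            # group effect as a log odds ratio
print(f"pseudo-R² change = {delta_r2:.4f}, log OR = {log_or:.3f}")

The study's point is that decisions based on the significance test alone behave differently from decisions that also require one of these effect sizes to exceed a threshold.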
Peer reviewed
Carvajal-Espinoza, Jorge; Welch, Greg W. – Online Submission, 2016
When tests are translated into one or more languages, the question of the equivalence of items across language forms arises. This equivalence can be assessed at the scale level by means of a multiple group confirmatory factor analysis (CFA) in the context of structural equation modeling. This study examined the measurement equivalence of a Spanish…
Descriptors: Translation, Spanish, English, Mathematics Tests
Fernandes, Amanda Careena – ProQuest LLC, 2016
Assessment is an integral part of learning, as it is used to gather information about a test-taker. Those in the field of academia, such as educational policy makers, instructors, and administrators, are able to use information gathered from tests to further instruction and learning decisions (Baker, 2006; Drianna, 2007; Kasper & Ross, 2013;…
Descriptors: Foreign Countries, Test Bias, Sex Fairness, English (Second Language)