Showing 1 to 15 of 33 results
Peer reviewed
Direct link
Zheng, Xiaying; Yang, Ji Seung – Measurement: Interdisciplinary Research and Perspectives, 2021
The purpose of this paper is to briefly introduce the two most common applications of multiple-group item response theory (IRT) models, namely differential item functioning (DIF) analysis and nonequivalent-group score linking with simultaneous calibration. We illustrate how to conduct those analyses using the "Stata" item…
Descriptors: Item Response Theory, Test Bias, Computer Software, Statistical Analysis
Peer reviewed
Direct link
Luo, Yong – Educational and Psychological Measurement, 2018
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Descriptors: Computer Software, Models, Statistical Analysis, Computation
Peer reviewed
Direct link
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package, available since Stata v.14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
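Packages like the one reviewed above fit one-, two-, and three-parameter logistic models. As a language-neutral illustration (a minimal sketch in Python, not the Stata package itself), the two-parameter logistic (2PL) item response function can be written as:

```python
import math

def irt_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model:
    P(X = 1 | theta) = 1 / (1 + exp(-a * (theta - b))),
    where a is item discrimination and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A person whose ability equals the item difficulty answers
# correctly with probability 0.5, regardless of discrimination.
p = irt_2pl(theta=1.2, a=1.7, b=1.2)  # → 0.5
```

Setting a = 1 for all items recovers the one-parameter (Rasch-type) special case.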
Peer reviewed
PDF on ERIC Download full text
Karlin, Omar; Karlin, Sayaka – InSight: A Journal of Scholarly Teaching, 2018
This study had two aims. The first was to explain the process of using the Rasch measurement model to validate tests in a way accessible to those unfamiliar with the model. The second was to validate two final exams with several shared items. The exams were given to two groups of students with slightly differing English…
Descriptors: Item Response Theory, Test Validity, Test Items, Accuracy
Peer reviewed
Direct link
Boone, William J.; Noltemeyer, Amity – Cogent Education, 2017
In order to progress as a field, school psychology research must be informed by effective measurement techniques. One approach to address the need for careful measurement is Rasch analysis. This technique can (a) facilitate the development of instruments that provide useful data, (b) provide data that can be used confidently for both descriptive…
Descriptors: Item Response Theory, School Psychology, School Psychologists, Educational Research
Peer reviewed
Direct link
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
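The pseudo-item recoding underlying such tree-based models can be sketched as follows; this is a minimal two-node illustration in Python (respond vs. omit, then correct vs. incorrect), and the node structure and names are assumptions for illustration, not the authors' exact framework:

```python
# Recode each observed response into pseudo-items for a two-node tree:
#   node 1: did the test taker respond at all? (1 = responded, 0 = omitted)
#   node 2: given a response, was it correct? (1/0; None if unobserved)
OMIT = None  # sentinel for a skipped or not-reached item

def irtree_recode(response):
    """Map one observed response (1, 0, or OMIT) to (node1, node2)."""
    if response is OMIT:
        return (0, None)          # omission: node 2 is not observed
    return (1, 1 if response == 1 else 0)

data = [1, 0, OMIT, 1]
pseudo = [irtree_recode(r) for r in data]
# → [(1, 1), (1, 0), (0, None), (1, 1)]
```

Fitting an IRT model jointly to both pseudo-items is what lets the missingness process share parameters with the proficiency process.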
Peer reviewed
Direct link
Raykov, Tenko; Marcoulides, George A.; Lee, Chun-Lung; Chang, Chi – Educational and Psychological Measurement, 2013
This note is concerned with a latent variable modeling approach for the study of differential item functioning in a multigroup setting. A multiple-testing procedure that can be used to evaluate group differences in response probabilities on individual items is discussed. The method is readily employed when the aim is also to locate possible…
Descriptors: Test Bias, Statistical Analysis, Models, Hypothesis Testing
Peer reviewed
PDF on ERIC Download full text
Demir, Mevhibe Kobak; Gür, Hülya – Educational Research and Reviews, 2016
This study aimed to develop a valid and reliable perception scale to determine pre-service teachers' perceptions of the use of WebQuests in mathematics teaching. The study was conducted with 115 junior and senior pre-service teachers at Balikesir University's Faculty of Education, Computer Education and Instructional…
Descriptors: Foreign Countries, Attitude Measures, Likert Scales, Test Construction
Peer reviewed
PDF on ERIC Download full text
Adeleke, A. A.; Joshua, E. O. – Journal of Education and Practice, 2015
Physics literacy plays a crucial part in global technological development, as many areas of science and technology apply concepts and principles of physics in their operations. However, the level of scientific literacy in physics in our society today falls short of the desirable standard. Therefore, this study focuses on…
Descriptors: Physics, Secondary School Students, Scientific Literacy, Foreign Countries
Peer reviewed
Direct link
Zufferey, Sandrine; Gygax, Pascal M. – Discourse Processes: A Multidisciplinary Journal, 2016
Previous research has suggested that some discourse relations are easier to convey implicitly than others due to cognitive biases in the interpretation of discourse. In this article we argue that relations involving a perspective shift, such as confirmation relations, are difficult to convey implicitly. We assess this claim with two empirical…
Descriptors: Role, Perspective Taking, Discourse Analysis, French
Peer reviewed
PDF on ERIC Download full text
Ahmed, Tamim; Hanif, Maria – Journal of Education and Practice, 2016
This study investigates students' achievement among two groups of families, i.e., low- and high-income families, and is designed for primary-level learners. A Reading, Arithmetic and Writing (RAW) achievement test, developed as part of another research study (Tamim Ahmed Khan, 2015), was adopted for this study. Both English medium…
Descriptors: Low Income, Performance Based Assessment, Elementary School Students, Achievement Tests
Peer reviewed
PDF on ERIC Download full text
Ashwell, Tim; Elam, Jesse R. – JALT CALL Journal, 2017
The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Language Tests
Peer reviewed
Direct link
Adedokun, Omolola A.; Burgess, Wilella D. – Journal of MultiDisciplinary Evaluation, 2012
Background: Although the McNemar test is the most appropriate tool for analyzing pre-post differences in dichotomous items (e.g., "yes" or "no," "correct" or "incorrect"), many scholars have noted the inappropriate use of Pearson's chi-square test by researchers, including social scientists and evaluators,…
Descriptors: Statistical Analysis, Test Items, Pretests Posttests, Hypothesis Testing
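The article's point can be illustrated with a minimal sketch: the McNemar test conditions on only the discordant cells of the paired 2x2 table, which Pearson's chi-square on the same table ignores. This sketch assumes the continuity-corrected chi-square form of the statistic (the exact binomial form is preferable when the discordant counts are small):

```python
import math

def mcnemar_test(b, c):
    """McNemar test on the discordant cells of a paired 2x2 table:
    b = cases changing incorrect -> correct (pre to post),
    c = cases changing correct -> incorrect.
    Uses the continuity-corrected chi-square statistic with 1 df;
    for 1 df, the survival function is erfc(sqrt(x / 2))."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

stat, p = mcnemar_test(b=15, c=5)  # stat = 4.05, p ≈ 0.044
```

Note that the concordant cells (participants who answered the same way pre and post) never enter the statistic; that is precisely what distinguishes it from an unpaired chi-square analysis.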
Peer reviewed
Direct link
Raykov, Tenko; Marcoulides, George A. – Structural Equation Modeling: A Multidisciplinary Journal, 2011
A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…
Descriptors: Item Analysis, Evaluation, Correlation, Test Items
Peer reviewed
Padilla, Jose Luis; Hidalgo, M. Dolores; Benitez, Isabel; Gomez-Benito, Juana – Psicologica: International Journal of Methodology and Experimental Psychology, 2012
Differential item functioning (DIF) analysis examines whether test takers of matched ability respond differently to items depending on characteristics such as language or ethnicity. This analysis can be performed by calculating various statistics, one of the most important being the Mantel-Haenszel,…
Descriptors: Foreign Countries, Test Bias, Computer Software, Computer Software Evaluation
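As a rough illustration of the quantity underlying the Mantel-Haenszel DIF statistic (a sketch in Python, not the software evaluated in the article), the common odds ratio compares reference- and focal-group success on an item across matched score strata:

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio over 2x2 strata.
    Each stratum is a tuple (A, B, C, D):
      A = reference correct, B = reference incorrect,
      C = focal correct,     D = focal incorrect.
    alpha_MH = sum(A*D / T) / sum(B*C / T), with T the stratum total."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# When both groups perform identically within every matched stratum,
# the common odds ratio is 1, i.e. no DIF signal on this item.
alpha = mh_odds_ratio([(20, 10, 20, 10), (5, 15, 5, 15)])  # → 1.0
```

A ratio well above or below 1 suggests the item favors one group after conditioning on ability, which is the flag the MH chi-square statistic then tests formally.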