Showing 1 to 15 of 46 results
Peer reviewed
PDF on ERIC: Download full text
Kartianom Kartianom; Heri Retnawati; Kana Hidayati – Journal of Pedagogical Research, 2024
Conducting a fair test is important for educational research. Unfair assessments can lead to gender disparities in academic achievement, ultimately resulting in disparities in opportunities, wages, and career choice. Differential Item Functioning (DIF) analysis is presented to provide evidence of whether a test is truly fair, such that it does not harm…
Descriptors: Foreign Countries, Test Bias, Item Response Theory, Test Theory
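As an illustration of the kind of DIF screening described in the abstract above, the sketch below computes the Mantel-Haenszel common odds ratio for a single dichotomous item, matching examinees on total score. This is one standard DIF method, not necessarily the authors' procedure, and all variable names are hypothetical.

    import numpy as np

    def mantel_haenszel_dif(item, group, total):
        """Mantel-Haenszel DIF statistic for one dichotomous item.

        item  : 0/1 responses to the studied item
        group : 0 = reference group, 1 = focal group
        total : total test score, used as the matching variable
        """
        num, den = 0.0, 0.0
        for k in np.unique(total):             # one 2x2 table per score stratum
            s = total == k
            a = np.sum((group[s] == 0) & (item[s] == 1))  # reference, correct
            b = np.sum((group[s] == 0) & (item[s] == 0))  # reference, incorrect
            c = np.sum((group[s] == 1) & (item[s] == 1))  # focal, correct
            d = np.sum((group[s] == 1) & (item[s] == 0))  # focal, incorrect
            n = a + b + c + d
            if n > 0:
                num += a * d / n
                den += b * c / n
        alpha = num / den                      # common odds ratio across strata
        return -2.35 * np.log(alpha)           # ETS delta scale; |delta| >= 1.5 suggests sizable DIF

Values near zero indicate negligible DIF; the sign indicates which group the item favors after matching on ability.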
Peer reviewed
Direct link
Baghaei, Purya; Christensen, Karl Bang – Language Testing, 2023
C-tests are gap-filling tests mainly used as rough and economical measures of second-language proficiency for placement and research purposes. A C-test usually consists of several short independent passages where the second half of every other word is deleted. Owing to their interdependent structure, C-test items violate the local independence…
Descriptors: Item Response Theory, Language Tests, Language Proficiency, Second Language Learning
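The gap-construction rule mentioned in the abstract (delete the second half of every other word) is easy to mechanize. The toy generator below is a simplification for illustration; operational C-tests typically also leave the first and last sentences intact and skip very short words, and the function name is invented here.

    def make_ctest_gaps(text: str) -> str:
        """Delete the second half of every other word (simplified C-test rule)."""
        words = text.split()
        out = []
        for i, w in enumerate(words):
            if i % 2 == 1 and len(w) > 1:      # every other word: 2nd, 4th, ...
                keep = (len(w) + 1) // 2       # keep the first half, rounded up
                out.append(w[:keep] + "_" * (len(w) - keep))
            else:
                out.append(w)
        return " ".join(out)

    print(make_ctest_gaps("The weather was sunny and warm all afternoon"))
    # -> The weat___ was sun__ and wa__ all after____

Because each gap depends on the surrounding passage, responses within a passage are correlated, which is the local-independence violation the article addresses.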
Peer reviewed
PDF on ERIC: Download full text
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' abilities, their standard errors, and the psychometric properties of the test across the two modes of test administration (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
Direct link
H. Cigdem Bulut; Okan Bulut; Ashley Clelland – Field Methods, 2025
In this study, we explored psychometric network analysis (PNA) as an alternative method for identifying item wording effects in self-report instruments. We examined the functioning of negatively worded items in the network structures of two math-related scales from the 2019 Trends in International Mathematics and Science Study (TIMSS); Students…
Descriptors: Psychometrics, Network Analysis, Identification, Test Items
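As a rough sketch of the network approach the abstract refers to: items become nodes and edges are partial correlations, commonly read off the inverse covariance (precision) matrix. The unregularized version below is for illustration only; published PNA studies typically use a regularized estimator such as EBICglasso, and the data here are random placeholders.

    import numpy as np

    def partial_correlation_network(X, threshold=0.1):
        """Items-by-items partial-correlation matrix from person-by-item data X."""
        precision = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance
        d = np.sqrt(np.diag(precision))
        pcor = -precision / np.outer(d, d)     # standardize to partial correlations
        np.fill_diagonal(pcor, 0.0)
        pcor[np.abs(pcor) < threshold] = 0.0   # crude sparsification of weak edges
        return pcor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))              # hypothetical responses to 8 items
    print(np.round(partial_correlation_network(X), 2))

Negatively worded items often surface as a separate, densely connected cluster in such a network, which is the wording effect the study examines.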
Peer reviewed
Direct link
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Peer reviewed
Direct link
Joo, Seang-Hwane; Khorramdel, Lale; Yamamoto, Kentaro; Shin, Hyo Jeong; Robin, Frederic – Educational Measurement: Issues and Practice, 2021
In the Programme for International Student Assessment (PISA), item response theory (IRT) scaling is used to examine the psychometric properties of items and scales and to provide comparable test scores across participating countries and over time. To balance the comparability of IRT item parameter estimations across countries with the best possible…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students
Peer reviewed
Direct link
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
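For context, the dichotomous model usually meant by "multidimensional item response theory" is the multidimensional 2PL; the abstract does not state the authors' exact parameterization, and their focus is the polytomous case:

    P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{\exp(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i + d_j)}{1 + \exp(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i + d_j)}

where \theta_i is person i's vector of latent traits, \mathbf{a}_j is item j's vector of discrimination (slope) parameters, and d_j is its intercept. Polytomous extensions, such as the multidimensional graded response model, replace the single intercept with ordered category thresholds, which is what makes their estimation harder.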
Peer reviewed
PDF on ERIC: Download full text
Cascella, Clelia; Giberti, Chiara; Bolondi, Giorgio – Education Sciences, 2021
This study is aimed at exploring how different formulations of the same mathematical item may influence students' answers, and whether or not boys and girls are equally affected by differences in presentation. An experimental design was employed: the same stem-items (i.e., items with the same mathematical content and question intent) were…
Descriptors: Mathematics Achievement, Mathematics Tests, Achievement Tests, Scores
Peer reviewed
PDF on ERIC: Download full text
Toker, Turker; Green, Kathy – International Journal of Assessment Tools in Education, 2021
This study provides a comparison of the results of latent class analysis (LCA) and mixture Rasch model (MRM) analysis using data from the Trends in International Mathematics and Science Study -- 2011 (TIMSS-2011) with a focus on the 8th-grade mathematics section. The research study focuses on the comparison of LCA and MRM to determine if results…
Descriptors: Multivariate Analysis, Structural Equation Models, Item Response Theory, Achievement Tests
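To make the contrast concrete: LCA models item responses with class-conditional probabilities only, while the mixture Rasch model additionally keeps a continuous ability within each latent class. In standard notation (not necessarily the authors'), the MRM response probability for person i on item j is:

    P(X_{ij} = 1) = \sum_{g} \pi_g \, \frac{\exp(\theta_{ig} - b_{jg})}{1 + \exp(\theta_{ig} - b_{jg})}

where \pi_g are mixing proportions summing to one, b_{jg} are class-specific item difficulties, and \theta_{ig} is ability within class g; fixing the ability term to a class constant collapses the model toward LCA.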
Peer reviewed
PDF on ERIC: Download full text
Ezechukwu, Roseline Ifeoma; Oguguo, Basil Chinecherem E.; Ene, Catherine U.; Ugorji, Clifford O. – World Journal of Education, 2020
This study determined the psychometric properties of the Economics Achievement Test (EAT) using Item Response Theory (IRT). Two popular IRT models, the one-parameter logistic (1PL) and two-parameter logistic (2PL) models, were utilized. The researchers adopted an instrumentation research design. Four research questions and two hypotheses were…
Descriptors: Economics Education, Economics, Achievement Tests, Psychometrics
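For readers comparing the two models named in the abstract, the standard forms are, for ability \theta, item difficulty b_j, and item discrimination a_j:

    \text{1PL:} \quad P(X_j = 1 \mid \theta) = \frac{\exp(\theta - b_j)}{1 + \exp(\theta - b_j)}
    \text{2PL:} \quad P(X_j = 1 \mid \theta) = \frac{\exp(a_j(\theta - b_j))}{1 + \exp(a_j(\theta - b_j))}

The 1PL constrains every item to the same discrimination, so items differ only in difficulty; the 2PL lets discrimination vary by item, which typically improves fit at the cost of extra parameters.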
Peer reviewed
Direct link
Petscher, Yaacov; Pfeiffer, Steven I. – Assessment for Effective Intervention, 2020
The authors evaluated measurement-level, factor-level, item-level, and scale-level revisions to the "Gifted Rating Scales-School Form" (GRS-S). Measurement-level considerations tested the extent to which treating the Likert-type scale rating as categorical or continuous produced different fit across unidimensional, correlated trait, and…
Descriptors: Psychometrics, Academically Gifted, Rating Scales, Factor Structure
Peer reviewed
Direct link
Yildirim, Hüseyin H. – Educational Assessment, Evaluation and Accountability, 2021
From a sociocognitive perspective, item parameters in a test represent regularities in examinees' item responses. These regularities originate from shared experiences among individuals in interacting with their environment. Theories explaining the relationship between culture and cognition also acknowledge these shared experiences as the…
Descriptors: Educational Assessment, Test Items, Item Response Theory, Psychometrics
Peer reviewed
PDF on ERIC: Download full text
Dogan, Enis – Practical Assessment, Research & Evaluation, 2018
Several large scale assessments include student, teacher, and school background questionnaires. Results from such questionnaires can be reported for each item separately, or as indices based on aggregation of multiple items into a scale. Interpreting scale scores is not always an easy task though. In disseminating results of achievement tests, one…
Descriptors: Rating Scales, Benchmarking, Questionnaires, Achievement Tests
Peer reviewed
Direct link
Jerrim, John; Parker, Philip; Choi, Alvaro; Chmielewski, Anna Katyn; Sälzer, Christine; Shure, Nikki – Educational Measurement: Issues and Practice, 2018
The Programme for International Student Assessment (PISA) is an important international study of 15-year-olds' knowledge and skills. New results are released every 3 years and have a substantial impact on education policy. Yet, despite its influence, the methodology underpinning PISA has received significant criticism. Much of this criticism has…
Descriptors: Educational Assessment, Comparative Education, Achievement Tests, Foreign Countries
Peer reviewed
Direct link
Cantley, Ian – Educational Philosophy and Theory, 2019
Mathematics achievement in different education systems around the world is assessed periodically in the Programme for International Student Assessment (PISA). PISA is deemed to yield robust international comparisons of mathematical attainment that enable individual countries and regions to monitor the performance of their education systems…
Descriptors: Mathematics Achievement, Achievement Tests, Foreign Countries, International Assessment