Showing 1 to 15 of 16 results
Peer reviewed
PDF on ERIC Download full text
Mahmut Sami Yigiter – Journal of Theoretical Educational Science, 2024
One of the main objectives of international large-scale assessments is to make comparisons between different countries, education policies, education systems, or subgroups. One of the main criteria for making comparisons between different groups is to ensure measurement invariance. The purpose of this study was to test the measurement invariance…
Descriptors: Mathematics, Mathematics Skills, Grade 4, Grade 8
Peer reviewed
Direct link
Wang, Ze – Large-scale Assessments in Education, 2022
In educational and psychological research, it is common to use latent factors to represent constructs and then to examine covariate effects on these latent factors. Using empirical data, this study applied three approaches to covariate effects on latent factors: the multiple-indicator multiple-cause (MIMIC) approach, multiple group confirmatory…
Descriptors: Comparative Analysis, Evaluation Methods, Grade 8, Mathematics Achievement
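The MIMIC approach mentioned in the abstract above regresses a latent factor on observed covariates within a structural equation model. As a rough, hypothetical sketch (assuming the semopy package is available; the variable names y1–y3 and x and the simulated data are invented for illustration), the model can be specified and fit like this:

```python
# Minimal MIMIC sketch with simulated data: latent factor eta measured by
# y1-y3 and regressed on the observed covariate x.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
eta = 0.5 * x + rng.normal(size=n)  # covariate effect on the latent factor
df = pd.DataFrame({
    "x": x,
    "y1": eta + rng.normal(scale=0.7, size=n),
    "y2": 0.8 * eta + rng.normal(scale=0.7, size=n),
    "y3": 0.9 * eta + rng.normal(scale=0.7, size=n),
})

desc = """
eta =~ y1 + y2 + y3
eta ~ x
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # parameter estimates, including the eta ~ x path
```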
Peer reviewed
PDF on ERIC Download full text
Soysal, Sümeyra – Participatory Educational Research, 2023
Applying a measurement instrument developed in one country to other countries raises a critical and important question of interest, especially in cross-cultural studies. Confirmatory factor analysis (CFA) is the most preferred and widely used method for examining the cross-cultural applicability of measurement tools. Although CFA is a sophisticated…
Descriptors: Generalization, Cross Cultural Studies, Measurement Techniques, Factor Analysis
Kritika Thapa – ProQuest LLC, 2023
Measurement invariance is crucial for making valid comparisons across different groups (Kline, 2016; Vandenberg, 2002). To address the challenges associated with invariance testing such as large sample size requirements, the complexity of the model, etc., applied researchers have incorporated parcels. Parcels have been shown to alleviate skewness,…
Descriptors: Elementary Secondary Education, Achievement Tests, Foreign Countries, International Assessment
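As a loose illustration of the parceling idea referenced above, items can be averaged into a few parcels before fitting invariance models. The item-to-parcel assignment below is arbitrary and the binary responses are simulated, not drawn from the dissertation's data:

```python
# Hypothetical parceling sketch: average arbitrary subsets of items into parcels.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(0, 2, size=(300, 9)),
                     columns=[f"item{i}" for i in range(1, 10)])

# Assign the nine items to three parcels; in practice the assignment would
# follow a substantive or balancing rationale rather than this arbitrary split.
parcels = pd.DataFrame({
    "parcel1": items[["item1", "item4", "item7"]].mean(axis=1),
    "parcel2": items[["item2", "item5", "item8"]].mean(axis=1),
    "parcel3": items[["item3", "item6", "item9"]].mean(axis=1),
})
print(parcels.describe())
```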
Peer reviewed
PDF on ERIC Download full text
ALKursheh, Taha Okleh; Al-zboon, Habis Saad; AlNasraween, Mo'en Salman – International Journal of Instruction, 2022
This study aimed at comparing the effect of two test item formats (multiple-choice and completion) on estimating a person's ability, item parameters, and the test information function (TIF). To achieve the aim of the study, two formats of a mathematics (1) test were created, multiple-choice and completion; in its final form the test consisted of 31 items. The…
Descriptors: Comparative Analysis, Test Items, Item Response Theory, Test Format
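Under a 2PL IRT model, the test information function (TIF) mentioned in this abstract is the sum of the item information functions, I(θ) = Σ a_i² P_i(θ)(1 − P_i(θ)). A small sketch with invented item parameters (not the study's items):

```python
# Sketch of a 2PL test information function; item parameters are invented.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0])   # discrimination parameters
b = np.array([-0.5, 0.0, 0.7, 1.2])  # difficulty parameters

def test_information(theta):
    """TIF(theta) = sum_i a_i^2 * P_i(theta) * (1 - P_i(theta)) under a 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(a**2 * p * (1.0 - p))

for theta in (-1.0, 0.0, 1.0):
    print(theta, round(test_information(theta), 3))
```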
Peer reviewed
PDF on ERIC Download full text
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) has many advantages over its predecessor, classical test theory (CTT), such as invariant item parameters and ability estimates that do not depend on the particular set of items. However, in order to obtain these advantages, certain assumptions should be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
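One of the assumptions listed above, unidimensionality, is often screened informally by inspecting the eigenvalues of the inter-item correlation matrix. A minimal simulated sketch of that screening step (not the study's nonparametric procedures):

```python
# Rough unidimensionality check: eigenvalues of the item correlation matrix for
# simulated binary responses; a first/second eigenvalue ratio well above 1 is
# commonly read as support for one dominant factor.
import numpy as np

rng = np.random.default_rng(2)
theta = rng.normal(size=1000)                     # single latent trait
b = np.linspace(-1.5, 1.5, 12)                    # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))   # 1PL response probabilities
responses = (rng.uniform(size=p.shape) < p).astype(int)

eigvals = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print("first/second eigenvalue ratio:", round(eigvals[0] / eigvals[1], 2))
```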
Peer reviewed
Direct link
Moses, Tim – Educational Measurement: Issues and Practice, 2014
This module describes and extends X-to-Y regression measures that have been proposed for use in the assessment of X-to-Y scaling and equating results. Measures are developed that are similar to those based on prediction error in regression analyses but that are directly suited to interests in scaling and equating evaluations. The regression and…
Descriptors: Scaling, Regression (Statistics), Equated Scores, Comparative Analysis
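These are not the article's specific measures, but the underlying idea of judging an X-to-Y conversion by regression-style prediction error can be sketched with simulated scores:

```python
# Compare the prediction error of an ordinary Y-on-X regression with that of a
# simple linear "equating-style" conversion that matches means and SDs.
# Scores are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(50, 10, size=2000)                 # scores on form X
y = 0.9 * x + 6 + rng.normal(0, 4, size=2000)     # scores on form Y

slope, intercept = np.polyfit(x, y, 1)
rmse_regression = np.sqrt(np.mean((y - (slope * x + intercept)) ** 2))

converted = (x - x.mean()) / x.std() * y.std() + y.mean()
rmse_conversion = np.sqrt(np.mean((y - converted) ** 2))

print(round(rmse_regression, 2), round(rmse_conversion, 2))
```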
Cho, Sun-Joo; Bottge, Brian A. – Grantee Submission, 2015
In a pretest-posttest cluster-randomized trial, one of the methods commonly used to detect an intervention effect involves controlling for pre-test scores and other related covariates while estimating the intervention effect at post-test. In many applications in education, the total post-test and pre-test scores, which ignore measurement error in the…
Descriptors: Item Response Theory, Hierarchical Linear Modeling, Pretests Posttests, Scores
Cho, Sun-Joo; Preacher, Kristopher J.; Bottge, Brian A. – Grantee Submission, 2015
Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test--post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test…
Descriptors: Structural Equation Models, Hierarchical Linear Modeling, Intervention, Program Effectiveness
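The commonly used observed-score approach both abstracts describe, controlling pre-test scores in a random-intercept multilevel model (without the IRT-based correction for measurement error the papers develop), can be sketched with simulated cluster-randomized data:

```python
# Detect an intervention effect at post-test while controlling for pre-test
# scores, with a random intercept for cluster. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
clusters, per_cluster = 40, 25
cluster_id = np.repeat(np.arange(clusters), per_cluster)
treat = np.repeat(rng.integers(0, 2, size=clusters), per_cluster)
cluster_effect = np.repeat(rng.normal(0, 0.5, size=clusters), per_cluster)
pre = rng.normal(size=clusters * per_cluster)
post = 0.6 * pre + 0.3 * treat + cluster_effect + rng.normal(size=clusters * per_cluster)

df = pd.DataFrame({"cluster": cluster_id, "treat": treat, "pre": pre, "post": post})
model = smf.mixedlm("post ~ pre + treat", df, groups=df["cluster"]).fit()
print(model.summary())
```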
Peer reviewed
Direct link
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
Peer reviewed
Direct link
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses, and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores
Peer reviewed
Direct link
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A. – Journal of Psychoeducational Assessment, 2015
The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…
Descriptors: Semitic Languages, Aptitude Tests, Error of Measurement, Factor Analysis
Peer reviewed
Direct link
Oh, Hyeonjoo; Moses, Tim – Journal of Educational Measurement, 2012
This study investigated differences between two approaches to chained equipercentile (CE) equating (one- and bi-direction CE equating) in nearly equal groups and relatively unequal groups. In one-direction CE equating, the new form is linked to the anchor in one sample of examinees and the anchor is linked to the reference form in the other…
Descriptors: Equated Scores, Statistical Analysis, Comparative Analysis, Differences
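A rough sketch of the one-direction chained equipercentile idea described above, composing an X-to-anchor link estimated in one group with an anchor-to-Y link estimated in the other (simulated scores; empirical percentile ranks via np.interp-style quantile matching):

```python
# One-direction chained equipercentile linking with simulated number-correct
# scores: new form X -> anchor A in group 1, anchor A -> reference Y in group 2.
import numpy as np

rng = np.random.default_rng(5)
grid = np.arange(0, 41)

x1 = rng.binomial(40, 0.55, size=3000)  # group 1: new form X
a1 = rng.binomial(40, 0.50, size=3000)  # group 1: anchor A
y2 = rng.binomial(40, 0.60, size=3000)  # group 2: reference form Y
a2 = rng.binomial(40, 0.52, size=3000)  # group 2: anchor A

def equipercentile(scores_from, scores_to, points):
    """Map scores by matching empirical percentile ranks."""
    pr = np.searchsorted(np.sort(scores_from), points, side="right") / len(scores_from)
    return np.quantile(scores_to, np.clip(pr, 0, 1))

x_to_anchor = equipercentile(x1, a1, grid)          # X -> A in group 1
anchor_to_y = equipercentile(a2, y2, x_to_anchor)   # A -> Y in group 2
print(np.round(anchor_to_y[:10], 1))                # chained X -> Y conversion
```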
Peer reviewed
PDF on ERIC Download full text
Braun, Henry; Qian, Jiahe – ETS Research Report Series, 2008
This report describes the derivation and evaluation of a method for comparing the performance standards for public school students set by different states. It is based on an approach proposed by McLaughlin and associates, which constituted an innovative attempt to resolve the confusion and concern that arise when very different proportions of…
Descriptors: State Standards, Comparative Analysis, Public Schools, National Competency Tests
Peer reviewed
Direct link
Ferrao, Maria – Assessment & Evaluation in Higher Education, 2010
The Bologna Declaration brought reforms into higher education that imply changes in teaching methods, didactic materials and textbooks, infrastructures and laboratories, etc. Statistics and mathematics are disciplines that traditionally have the worst success rates, particularly in non-mathematics core curricula courses. This research project,…
Descriptors: Foreign Countries, Computer Assisted Testing, Educational Technology, Educational Assessment