Zsuzsa Bakk – Structural Equation Modeling: A Multidisciplinary Journal, 2024
A standard assumption of latent class (LC) analysis is conditional independence; that is, the items of the LC model are independent of the covariates given the latent classes. Several approaches have been proposed for identifying violations of this assumption. The recently proposed likelihood ratio approach is compared to residual statistics (bivariate residuals…
Descriptors: Goodness of Fit, Error of Measurement, Comparative Analysis, Models
Christine E. DeMars; Paulius Satkus – Educational and Psychological Measurement, 2024
Marginal maximum likelihood, a common estimation method for item response theory models, is not inherently a Bayesian procedure. However, due to estimation difficulties, Bayesian priors are often applied to the likelihood when estimating 3PL models, especially with small samples. Little focus has been placed on choosing the priors for marginal…
Descriptors: Item Response Theory, Statistical Distributions, Error of Measurement, Bayesian Statistics
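The 3PL model and the use of a prior on its guessing parameter can be sketched briefly. This is a minimal illustration, not the estimation procedure studied in the paper: the Beta hyperparameters below are illustrative defaults, and a full marginal maximum likelihood routine would integrate over the ability distribution.

```python
import math

def p_3pl(theta, a, b, c):
    """3PL item characteristic curve: probability of a correct response
    for ability theta, discrimination a, difficulty b, and lower
    asymptote (guessing parameter) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def log_prior_beta(c, alpha=5.0, beta=17.0):
    """Log of a Beta(alpha, beta) prior density (up to a constant) on
    the guessing parameter -- the kind of prior commonly added to the
    likelihood to stabilize 3PL estimation in small samples.
    The hyperparameters here are illustrative, not the paper's."""
    return (alpha - 1.0) * math.log(c) + (beta - 1.0) * math.log(1.0 - c)
```

At theta = b the curve passes through (1 + c) / 2, which is a quick sanity check on any 3PL implementation.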
Yanxuan Qu; Sandip Sinharay – ETS Research Report Series, 2024
The goal of this paper is to find better ways to estimate the internal consistency reliability of scores on tests with a specific type of design often encountered in practice: tests with constructed-response items clustered into sections that are neither parallel nor tau-equivalent, one of which contains only a single item. To estimate the…
Descriptors: Test Reliability, Essay Tests, Construct Validity, Error of Measurement
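For context, the classical internal-consistency baseline such work builds on is coefficient alpha, which assumes tau-equivalent parts — exactly the assumption the test design described above violates. A minimal sketch of the classical computation:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Coefficient alpha from examinees' item-score vectors
    (rows = examinees, columns = items):
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    Assumes tau-equivalent items; it is biased for designs like the
    one described in the abstract."""
    k = len(item_scores[0])
    columns = list(zip(*item_scores))
    item_vars = sum(pvariance(col) for col in columns)
    total_var = pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

With perfectly parallel items the statistic reaches its maximum of 1.0.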
Huang, Francis L.; Zhang, Bixi; Li, Xintong – Journal of Research on Educational Effectiveness, 2023
Binary outcomes are often analyzed in cluster randomized trials (CRTs) using logistic regression, and cluster robust standard errors (CRSEs) are routinely used to account for the dependent nature of nested data in such models. However, CRSEs can be problematic when the number of clusters is low (e.g., < 50) and, with CRTs, a low number of…
Descriptors: Robustness (Statistics), Error of Measurement, Regression (Statistics), Multivariate Analysis
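The sandwich idea behind CRSEs reduces, in the simplest intercept-only linear case, to summing residuals within clusters before squaring. This is only a sketch of that special case, not the logistic-regression estimator the paper examines, and it uses the uncorrected CR0 form (small-sample corrections such as CR2 are omitted):

```python
from statistics import fmean

def cluster_robust_se(y, cluster_ids):
    """CR0 cluster-robust standard error of the grand mean
    (intercept-only model). Residuals are summed within each cluster
    first, so within-cluster dependence inflates the 'meat' of the
    sandwich instead of being ignored."""
    n = len(y)
    mean = fmean(y)
    resid_sums = {}
    for yi, g in zip(y, cluster_ids):
        resid_sums[g] = resid_sums.get(g, 0.0) + (yi - mean)
    meat = sum(s * s for s in resid_sums.values())
    return (meat ** 0.5) / n
```

With very few clusters the "meat" is a sum over only a handful of terms, which is one intuition for why CRSEs become unreliable in that regime.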
Moretti, Angelo; Whitworth, Adam – Sociological Methods & Research, 2023
Spatial microsimulation encompasses a range of alternative methodological approaches for the small area estimation (SAE) of target population parameters from sample survey data down to target small areas in contexts where such data are desired but not otherwise available. Although widely used, an enduring limitation of spatial microsimulation SAE…
Descriptors: Simulation, Geometric Concepts, Computation, Measurement
Raggi, Martina; Stanghellini, Elena; Doretti, Marco – Sociological Methods & Research, 2023
The decomposition of the overall effect of a treatment into direct and indirect effects is here investigated with reference to a recursive system of binary random variables. We show how, for the single mediator context, the marginal effect measured on the log odds scale can be written as the sum of the indirect and direct effects plus a residual…
Descriptors: Path Analysis, Student Attitudes, Museums, Error of Measurement
Carpentras, Dino; Quayle, Michael – International Journal of Social Research Methodology, 2023
Agent-based models (ABMs) often rely on psychometric constructs such as 'opinions', 'stubbornness', 'happiness', etc. The measurement process for these constructs is quite different from the one used in physics as there is no standardized unit of measurement for opinion or happiness. Consequently, measurements are usually affected by 'psychometric…
Descriptors: Psychometrics, Error of Measurement, Models, Prediction
Jiang, Zhehan; Raymond, Mark; DiStefano, Christine; Shi, Dexin; Liu, Ren; Sun, Junhua – Educational and Psychological Measurement, 2022
Computing confidence intervals around generalizability coefficients has long been a challenging task in generalizability theory. This is a serious practical problem because generalizability coefficients are often computed from designs where some facets have small sample sizes, and researchers have little guidance regarding the trustworthiness of the…
Descriptors: Monte Carlo Methods, Intervals, Generalizability Theory, Error of Measurement
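A generalizability coefficient and a Monte Carlo percentile interval around it can be sketched as follows. This is a generic illustration of the quantities involved, assuming a one-facet p × i design; the draws fed to the interval function would in practice come from simulating variance-component estimates, which is not shown here:

```python
def g_coefficient(var_person, var_residual, n_conditions):
    """Generalizability coefficient for a one-facet p x i design:
    universe-score variance over itself plus averaged relative-error
    variance."""
    return var_person / (var_person + var_residual / n_conditions)

def mc_interval(draws, level=1.0):
    """Percentile interval from Monte Carlo draws of a coefficient
    (simple index-truncation percentiles; illustrative only)."""
    s = sorted(draws)
    lo = s[int((1 - level) / 2 * (len(s) - 1))]
    hi = s[int((1 + level) / 2 * (len(s) - 1))]
    return lo, hi
```

When facet sample sizes are small, the variance-component estimates feeding `g_coefficient` are noisy, which is why interval estimates matter more than the point estimate alone.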
Schnell, Rainer; Redlich, Sarah; Göritz, Anja S. – Field Methods, 2022
Frequencies of behaviors or amounts of variables of interest are essential topics in many surveys. The use of heuristics may cause rounded answers, resulting in an increased occurrence of certain end-digits (called heaping or digit preference). For web surveys (or CASI), we propose using a conditional prompt as input validation if digits indicating…
Descriptors: Online Surveys, Validity, Numbers, Computation
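Heaping can be quantified from the end-digit distribution, and a conditional prompt can be triggered from the same check. The trigger rule below (final digit 0 or 5) is a hypothetical simplification for illustration, not the authors' exact validation rule:

```python
def heaping_rate(values):
    """Share of integer responses whose final digit is 0 or 5.
    Under no digit preference this should be near 0.2
    (two of ten possible end digits)."""
    heaped = sum(1 for v in values if v % 10 in (0, 5))
    return heaped / len(values)

def needs_prompt(value, heaped_digits=(0, 5)):
    """Conditional-prompt rule: ask the respondent to confirm when
    the entered value ends in a digit suggesting rounding.
    Hypothetical rule for illustration."""
    return value % 10 in heaped_digits
```

A rate well above 0.2 in collected data is the usual diagnostic that respondents are rounding.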
Philipp Sterner; Kim De Roover; David Goretzko – Structural Equation Modeling: A Multidisciplinary Journal, 2025
When comparing relations and means of latent variables, it is important to establish measurement invariance (MI). Most methods to assess MI are based on confirmatory factor analysis (CFA). Recently, new methods have been developed based on exploratory factor analysis (EFA); most notably, as extensions of multi-group EFA, researchers introduced…
Descriptors: Error of Measurement, Measurement Techniques, Factor Analysis, Structural Equation Models
Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing has led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT performs in grading university exams compared to human teachers. Aspects investigated include consistency, large discrepancies, and length of answer. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
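Comparing two graders on consistency and large discrepancies, as the study does, amounts to simple summary statistics over paired scores. The discrepancy threshold below is illustrative, not the one used in the study:

```python
from statistics import fmean

def grading_discrepancies(ai_grades, human_grades, large=2):
    """Compare two graders' scores on the same answers: returns the
    mean absolute difference and the indices of answers where the
    graders disagree by at least `large` points (threshold is
    illustrative)."""
    diffs = [abs(a - h) for a, h in zip(ai_grades, human_grades)]
    flagged = [i for i, d in enumerate(diffs) if d >= large]
    return fmean(diffs), flagged
```

Running the same comparison on two human graders gives a baseline against which the AI–human discrepancies can be judged.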
Tong Wu; Stella Y. Kim; Carl Westine; Michelle Boyer – Journal of Educational Measurement, 2025
While significant attention has been given to test equating to ensure score comparability, limited research has explored equating methods for rater-mediated assessments, where human raters inherently introduce error. If not properly addressed, these errors can undermine score interchangeability and test validity. This study proposes an equating…
Descriptors: Item Response Theory, Evaluators, Error of Measurement, Test Validity
Stephanie M. Bell; R. Philip Chalmers; David B. Flora – Educational and Psychological Measurement, 2024
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measures a target construct as represented by a factor in a factor-analysis model; as such, the accuracy of omega estimates is likely to depend on correct…
Descriptors: Influences, Models, Measurement Techniques, Reliability
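For a one-factor model, coefficient omega has a closed form in the factor loadings and unique variances. A minimal sketch, assuming the factor variance is fixed to 1 and ignoring the model-misspecification issues the paper investigates:

```python
def coefficient_omega(loadings, unique_variances):
    """Coefficient omega for a one-factor model with factor variance
    fixed to 1: the squared sum of loadings (common variance of the
    composite) over total composite variance (common plus the sum of
    unique variances)."""
    common = sum(loadings) ** 2
    return common / (common + sum(unique_variances))
```

Because the formula is built from estimated loadings, omega inherits any bias in the fitted factor model — which is why its accuracy depends on correct model specification, as the abstract notes.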
Alexander Robitzsch; Oliver Lüdtke – Measurement: Interdisciplinary Research and Perspectives, 2024
Educational large-scale assessment (LSA) studies like the Programme for International Student Assessment (PISA) provide important information about trends in the performance of educational indicators in cognitive domains. The change in the country means in a cognitive domain like reading between two successive assessments is an example of a trend…
Descriptors: Secondary School Students, Foreign Countries, International Assessment, Achievement Tests