Showing 1 to 15 of 113 results
Peer reviewed | PDF on ERIC
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to reveal the accuracy of estimation of multiple-choice test items parameters following the models of the item-response theory in measurement. Materials/methods: The researchers depended on the measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
Egamaria Alacam; Craig K. Enders; Han Du; Brian T. Keller – Grantee Submission, 2023
Composite scores are an exceptionally important psychometric tool for behavioral science research applications. A prototypical example occurs with self-report data, where researchers routinely use questionnaires with multiple items that tap into different features of a target construct. Item-level missing data are endemic to composite score…
Descriptors: Regression (Statistics), Scores, Psychometrics, Test Items
Peer reviewed | PDF on ERIC
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
Peer reviewed | Direct link
Meng, Yaru; Fu, Hua – Modern Language Journal, 2023
The distinguishing feature of dynamic assessment (DA) is the dialectical integration of assessment and instruction. However, how to design the targeted instruction or mediation has been relatively underexplored. To address this gap, this study proposes the attribute-based mediation model (AMM), an English-as-a-foreign-language listening mediation…
Descriptors: Evaluation Methods, Teaching Methods, Models, English (Second Language)
Peer reviewed | Direct link
Alpizar, David; Li, Tongyun; Norris, John M.; Gu, Lixiong – Language Testing, 2023
The C-test is a type of gap-filling test designed to efficiently measure second language proficiency. The typical C-test consists of several short paragraphs with the second half of every second word deleted. The words with deleted parts are considered as items nested within the corresponding paragraph. Given this testlet structure, it is commonly…
Descriptors: Psychometrics, Language Tests, Second Language Learning, Test Items
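The C-test construction rule this entry describes (deleting the second half of every second word) can be sketched as follows. This is a minimal illustration only, not the authors' implementation; conventions vary (e.g., which word deletion starts from, and how odd-length words are split — here the longer first half is kept).

```python
def make_ctest(text: str) -> str:
    """Apply a simple C-test deletion rule: starting from the second
    word, blank out the second half of every second word. For words of
    odd length, the longer first half is kept (a common convention)."""
    words = text.split()
    out = []
    for i, w in enumerate(words):
        # Damage every second word (1-based even positions); leave
        # one-letter words intact since there is no half to delete.
        if i % 2 == 1 and len(w) > 1:
            keep = (len(w) + 1) // 2  # keep the first half, rounding up
            out.append(w[:keep] + "_" * (len(w) - keep))
        else:
            out.append(w)
    return " ".join(out)

print(make_ctest("The cat sat on the mat"))
# → The ca_ sat o_ the ma_
```

Each blanked word then serves as an item nested within its paragraph, which is the testlet structure the study models.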
Chengcheng Li – ProQuest LLC, 2022
Categorical data become increasingly ubiquitous in the modern big data era. In this dissertation, we propose novel statistical learning and inference methods for large-scale categorical data, focusing on latent variable models and their applications to psychometrics. In psychometric assessments, the subjects' underlying aptitude often cannot be…
Descriptors: Statistical Inference, Data Analysis, Psychometrics, Raw Scores
Peer reviewed | Direct link
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Qunbar, Sa'ed Ali – ProQuest LLC, 2019
This work presents a study that used distributed language representations of test items to model test item difficulty. Distributed language representations are low-dimensional numeric representations of written language inspired and generated by artificial neural network architecture. The research begins with a discussion of the importance of item…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Models
Peer reviewed | PDF on ERIC
Torre, Jimmy de la; Akbay, Lokman – Eurasian Journal of Educational Research, 2019
Purpose: Well-designed assessment methodologies and various cognitive diagnosis models (CDMs) to extract diagnostic information about examinees' individual strengths and weaknesses have been developed. Due to this novelty, as well as educational specialists' lack of familiarity with CDMs, their applications are not widespread. This article aims at…
Descriptors: Cognitive Measurement, Models, Computer Software, Testing
Peer reviewed | Direct link
Madsen, Esben Elholm; Elbe, Anne-Marie; Krustrup, Peter; Larsen, Carsten Hvid; Larsen, Malte Nejst; Madsen, Mads; Hansen, Tina – Cogent Education, 2021
The trans-contextual model (TCM) offers a heuristic-based theoretical framework to understand fifth-grade Danish schoolchildren's motivation to participate in the 11 for Health in Denmark educational football concept, as well as their intention and behaviour to participate in vigorous physical activity (PA) in a leisure-time context. The…
Descriptors: Translation, Content Validity, Context Effect, Models
Peer reviewed | Direct link
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Nixi Wang – ProQuest LLC, 2022
Measurement errors attributable to cultural issues are complex and challenging for educational assessments. We need assessment tests sensitive to the cultural heterogeneity of populations, and psychometric methods appropriate to address fairness and equity concerns. Built on the research of culturally responsive assessment, this dissertation…
Descriptors: Culturally Relevant Education, Testing, Equal Education, Validity
Peer reviewed | PDF on ERIC
Afsharrad, Mohammad; Pishghadam, Reza; Baghaei, Purya – International Journal of Language Testing, 2023
Testing organizations are faced with increasing demand to provide subscores in addition to the total test score. However, psychometricians argue that most subscores do not have added value to be worth reporting. To have added value, subscores need to meet a number of criteria: they should be reliable, distinctive, and distinct from each other and…
Descriptors: Comparative Analysis, Scores, Value Added Models, Psychometrics
Peer reviewed | PDF on ERIC
Storme, Martin; Myszkowski, Nils; Baron, Simon; Bernard, David – Journal of Intelligence, 2019
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in…
Descriptors: Intelligence Tests, Item Response Theory, Comparative Analysis, Test Reliability