Showing 1 to 15 of 31 results
Peer reviewed
Carpentras, Dino; Quayle, Michael – International Journal of Social Research Methodology, 2023
Agent-based models (ABMs) often rely on psychometric constructs such as 'opinions', 'stubbornness', 'happiness', etc. The measurement process for these constructs is quite different from the one used in physics as there is no standardized unit of measurement for opinion or happiness. Consequently, measurements are usually affected by 'psychometric…
Descriptors: Psychometrics, Error of Measurement, Models, Prediction
Peer reviewed
Han, Yuting; Zhang, Jihong; Jiang, Zhehan; Shi, Dexin – Educational and Psychological Measurement, 2023
In the literature of modern psychometric modeling, mostly related to item response theory (IRT), the fit of model is evaluated through known indices, such as the chi-square statistic (χ²), M2, and root mean square error of approximation (RMSEA) for absolute assessments as well as Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian…
Descriptors: Goodness of Fit, Psychometrics, Error of Measurement, Item Response Theory
Peer reviewed
Stefanie A. Wind; Yangmeng Xu – Educational Assessment, 2024
We explored three approaches to resolving or re-scoring constructed-response items in mixed-format assessments: rater agreement, person fit, and targeted double scoring (TDS). We used a simulation study to consider how the three approaches impact the psychometric properties of student achievement estimates, with an emphasis on person fit. We found…
Descriptors: Interrater Reliability, Error of Measurement, Evaluation Methods, Examiners
Sophie Lilit Litschwartz – ProQuest LLC, 2021
In education research test scores are a common object of analysis. Across studies test scores can be an important outcome, a highly predictive covariate, or a means of assigning treatment. However, test scores are a measure of an underlying proficiency we can't observe directly and so contain error. This measurement error has implications for how…
Descriptors: Scores, Inferences, Educational Research, Evaluation Methods
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis, it is also essential in scale development because the fit results will guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing response data are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
Peer reviewed
Radu Bogdan Toma – Journal of Early Adolescence, 2024
The Expectancy-Value model has been extensively used to understand students' achievement motivation. However, recent studies propose the inclusion of cost as a separate construct from values, leading to the development of the Expectancy-Value-Cost model. This study aimed to adapt Kosovich et al.'s ("The Journal of Early Adolescence", 35,…
Descriptors: Student Motivation, Student Attitudes, Academic Achievement, Mathematics Achievement
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2017
The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…
Descriptors: Error of Measurement, Factor Analysis, Research Methodology, Psychometrics
Peer reviewed
Gingerich, Andrea; Ramlo, Susan E.; van der Vleuten, Cees P. M.; Eva, Kevin W.; Regehr, Glenn – Advances in Health Sciences Education, 2017
Whenever multiple observers provide ratings, even of the same performance, inter-rater variation is prevalent. The resulting "idiosyncratic rater variance" is considered to be unusable error of measurement in psychometric models and is a threat to the defensibility of our assessments. Prior studies of inter-rater variation in clinical…
Descriptors: Interrater Reliability, Error of Measurement, Psychometrics, Q Methodology
Leventhal, Brian – ProQuest LLC, 2017
More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…
Descriptors: Psychometrics, Item Response Theory, Simulation, Models
Domingue, Benjamin Webre – ProQuest LLC, 2012
In psychometrics, it is difficult to verify that measurement instruments can be used to produce numeric values with the desirable property that differences between units are equal-interval because the attributes being measured are latent. The theory of additive conjoint measurement (e.g., Krantz, Luce, Suppes, & Tversky, 1971, ACM) guarantees…
Descriptors: Psychometrics, Evaluation Methods, Error of Measurement, Intervals
Peer reviewed
Griffith, James W.; Kleim, Birgit; Sumner, Jennifer A.; Ehlers, Anke – Psychological Assessment, 2012
The objective of this study was to examine the psychometric properties of the Autobiographical Memory Test (AMT), which is widely used to measure overgeneral autobiographical memory in individuals with depression and a trauma history. Its factor structure and internal consistency have not been explored in a clinical sample. This study examined the…
Descriptors: Memory, Test Construction, Evaluation Methods, Psychometrics
Mbella, Kinge Keka – ProQuest LLC, 2012
Mixed-format assessments are increasingly being used in large scale standardized assessments to measure a continuum of skills ranging from basic recall to higher order thinking skills. These assessments are usually comprised of a combination of (a) multiple-choice items which can be efficiently scored, have stable psychometric properties, and…
Descriptors: Educational Assessment, Test Format, Evaluation Methods, Multiple Choice Tests
Diakow, Ronli Phyllis – ProQuest LLC, 2013
This dissertation comprises three papers that propose, discuss, and illustrate models to make improved inferences about research questions regarding student achievement in education. Addressing the types of questions common in educational research today requires three different "extensions" to traditional educational assessment: (1)…
Descriptors: Inferences, Educational Assessment, Academic Achievement, Educational Research
Peer reviewed
Traynor, Anne; Raykov, Tenko – Comparative Education Review, 2013
In international achievement studies, questionnaires typically ask about the presence of particular household assets in students' homes. Responses to the assets questions are used to compute a total score, which is intended to represent household wealth in models of test performance. This study uses item analysis and confirmatory factor analysis…
Descriptors: Secondary School Students, Academic Achievement, Validity, Psychometrics
Brandt, Lorilynn – ProQuest LLC, 2010
Phonics was identified as one of the critical components in reading development by the National Reading Panel. Over time, research has repeatedly identified phonics as important to early reading development. Given the compelling evidence supporting the teaching of phonics in early reading, it is critical to make sure that instructional decisions…
Descriptors: Generalizability Theory, Phonics, Early Reading, Validity