Showing all 6 results
Kaplan, David; Chen, Jianshen; Yavuz, Sinan; Lyu, Weicong – Grantee Submission, 2022
The purpose of this paper is to demonstrate and evaluate the use of "Bayesian dynamic borrowing" (Viele et al., in Pharm Stat 13:41-54, 2014) as a means of systematically utilizing historical information with specific applications to large-scale educational assessments. Dynamic borrowing via Bayesian hierarchical models is a special case…
Descriptors: Bayesian Statistics, Models, Prediction, Accuracy
Peer reviewed
Sainan Xu; Jing Lu; Jiwei Zhang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
With the growing attention on large-scale educational testing and assessment, the ability to process substantial volumes of response data becomes crucial. Current estimation methods within item response theory (IRT), despite their high precision, often pose considerable computational burdens with large-scale data, leading to reduced computational…
Descriptors: Educational Assessment, Bayesian Statistics, Statistical Inference, Item Response Theory
Peer reviewed
Selleri, Patrizia; Carugati, Felice – European Journal of Psychology of Education, 2018
There is a consensus that the items proposed by the Program for International Student Assessment (PISA) allow us to focus on the outcomes of the processes of appropriation and transformation of learning tools at the end of compulsory schooling, particularly regarding the key competencies for lifelong learning and citizenship in digital…
Descriptors: Foreign Countries, Achievement Tests, Secondary School Students, International Assessment
Peer reviewed
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
Peer reviewed
Chen, Haiwen H.; von Davier, Matthias; Yamamoto, Kentaro; Kong, Nan – ETS Research Report Series, 2015
One major issue with large-scale assessments is that respondents might give no responses to many items, resulting in less accurate estimates of both assessed abilities and item parameters. This report studies how the types of items affect the item-level nonresponse rates and how different methods of treating item-level nonresponses have an…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
Peer reviewed
Bolt, Daniel M.; Newton, Joseph R. – Educational and Psychological Measurement, 2011
This article extends a methodological approach considered by Bolt and Johnson for the measurement and control of extreme response style (ERS) to the analysis of rating data from multiple scales. Specifically, it is shown how the simultaneous analysis of item responses across scales allows for more accurate identification of ERS, and more effective…
Descriptors: Response Style (Tests), Measurement, Item Response Theory, Accuracy