Showing 1 to 15 of 222 results
Peer reviewed
Direct link
Javed Iqbal; Tanweer Ul Islam – Educational Research and Evaluation, 2024
Economic efficiency demands accurate assessment of individual ability for selection purposes. This study investigates Classical Test Theory (CTT) and Item Response Theory (IRT) for estimating true ability and ranking individuals. Two Monte Carlo simulations and real data analyses were conducted. Results suggest a slight advantage for IRT, but…
Descriptors: Item Response Theory, Monte Carlo Methods, Ability, Statistical Analysis
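The CTT-versus-IRT contrast in this entry can be illustrated with a minimal sketch: under CTT, two examinees with the same number-correct score are tied, while a 2PL IRT model can rank them differently because it weights items by their discrimination. All item parameters below are hypothetical, and the grid-search estimator is a simplification of the study's actual procedures.

```python
import numpy as np

# Hypothetical 2PL item parameters: a = discrimination, b = difficulty.
a = np.array([1.5, 0.8, 1.2, 2.0, 0.6])
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])

def mle_theta(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood estimate of ability (theta)."""
    p = 1 / (1 + np.exp(-a * (grid[:, None] - b)))   # shape (grid, items)
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Two response patterns with the same CTT total score (3 of 5 correct):
pattern_1 = np.array([1, 1, 1, 0, 0])   # correct on the easier items
pattern_2 = np.array([0, 0, 1, 1, 1])   # correct on more discriminating items
print(mle_theta(pattern_1, a, b), mle_theta(pattern_2, a, b))
```

Under the 2PL the sufficient statistic for theta is the discrimination-weighted score, so the two tied CTT patterns receive different theta estimates.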
Peer reviewed
PDF on ERIC Download full text
Rukayat Oyebola Iwintolu; Oluwaseyi Aina Gbolade Opesemowo; Phebean Oluwaseyi Adetutu – Journal on Efficiency and Responsibility in Education and Science, 2024
This study examines the influence of the 2-parameter logistic (2PL) and 3-parameter logistic (3PL) models on the ability estimates of students on binary mathematics items. It estimated the item parameters under the 2PL and 3PL models. We employed Item Response Theory (IRT) in the design of this research survey, with a sample…
Descriptors: Foreign Countries, Secondary School Mathematics, Secondary School Students, Mathematics Instruction
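The difference between the 2PL and 3PL models compared in this entry comes down to the pseudo-guessing parameter c, which gives the 3PL a nonzero lower asymptote and therefore different ability estimates for low-scoring examinees. A short sketch with made-up item parameters:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """3PL adds a pseudo-guessing lower asymptote c to the 2PL curve."""
    return c + (1 - c) * p_2pl(theta, a, b)

a, b, c = 1.2, 0.0, 0.25   # hypothetical item parameters
low, high = -3.0, 3.0
# At low ability the 3PL probability is pulled up toward c, the
# chance of guessing correctly; at high ability the models agree.
print(round(p_2pl(low, a, b), 3), round(p_3pl(low, a, b, c), 3))
```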
Peer reviewed
PDF on ERIC Download full text
Kurniasih, Nia; Emilia, Emi; Sujatna, Eva Tuckyta Sari – International Journal of Language Testing, 2023
This study aimed to evaluate a PISA-like reading test developed by teachers participating in teacher training for teaching PISA-like reading. To this end, an experimental test was administered to 107 students aged 15-16 using a set of texts and questions constructed according to the criteria of the PISA Reading test Level 1. Item…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Secondary School Students
Peer reviewed
Direct link
Maria Bolsinova; Jesper Tijmstra; Leslie Rutkowski; David Rutkowski – Journal of Educational and Behavioral Statistics, 2024
Profile analysis is one of the main tools for studying whether differential item functioning can be related to specific features of test items. While relevant, profile analysis in its current form has two restrictions that limit its usefulness in practice: It assumes that all test items have equal discrimination parameters, and it does not test…
Descriptors: Test Items, Item Analysis, Generalizability Theory, Achievement Tests
Peer reviewed
Direct link
Patterson, Leigh Cameron – Australian Journal of Education, 2023
Considerable interest lies in the growth in educational achievement that occurs over the course of a child's schooling. This paper demonstrates a simple but effective approach for the comparison of growth rates, drawing on a method first proposed some 80 years ago and applying it to data from the Australian National Assessment Program. The…
Descriptors: Item Response Theory, Growth Models, Psychometrics, National Competency Tests
Peer reviewed
Direct link
Scharl, Anna; Zink, Eva – Large-scale Assessments in Education, 2022
Educational large-scale assessments (LSAs) often provide plausible values for the administered competence tests to facilitate the estimation of population effects. This requires the specification of a background model that is appropriate for the specific research question. Because the "German National Educational Panel Study" (NEPS) is…
Descriptors: National Competency Tests, Foreign Countries, Programming Languages, Longitudinal Studies
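Plausible values of the kind this entry describes are random draws from an examinee's posterior ability distribution, with the background model supplying the prior. The sketch below uses hypothetical item parameters and a single normal prior whose mean stands in for a background regression; operational LSA procedures (including those of the NEPS) are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2PL item parameters and one examinee's responses.
a = np.array([1.0, 1.3, 0.7, 1.8])
b = np.array([-0.5, 0.2, 0.8, -1.0])
x = np.array([1, 0, 1, 1])

# Background model: the prior mean for theta is predicted from
# covariates (e.g., a regression on background variables);
# prior_sd is the residual standard deviation. Values are invented.
prior_mean, prior_sd = 0.3, 0.9

grid = np.linspace(-4, 4, 1601)
p = 1 / (1 + np.exp(-a * (grid[:, None] - b)))
loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
log_post = loglik - 0.5 * ((grid - prior_mean) / prior_sd) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Plausible values: random draws from the posterior, not point estimates.
pvs = rng.choice(grid, size=5, p=post)
print(pvs)
```

Because plausible values are draws rather than point estimates, secondary analyses over them recover population variance that a single theta estimate per person would understate.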
Peer reviewed
PDF on ERIC Download full text
Sideridis, Georgios; Alahmadi, Maisa – Journal of Intelligence, 2022
The goal of the present study was to extend earlier work on the estimation of person theta using maximum likelihood estimation in R by accounting for rapid guessing. This paper provides a modified R function that accommodates person thetas using the Rasch or 2PL models and implements corrections for the presence of rapid guessing or informed…
Descriptors: Guessing (Tests), Reaction Time, Item Response Theory, Aptitude Tests
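The paper provides a modified R function; as a rough illustration of the underlying idea, the sketch below (in Python, with an invented 3-second threshold) drops responses flagged as rapid guesses from a Rasch likelihood before maximizing over theta. This is one possible correction, not the authors' actual implementation.

```python
import numpy as np

def mle_theta_rasch(x, b, rt, rt_threshold=3.0, grid=np.linspace(-4, 4, 801)):
    """MLE of theta under the Rasch model, dropping responses whose
    response time falls below a rapid-guessing threshold (seconds).
    Both the threshold and the drop-from-likelihood correction are
    illustrative choices, not the paper's R function."""
    keep = rt >= rt_threshold
    x, b = x[keep], b[keep]
    p = 1 / (1 + np.exp(-(grid[:, None] - b)))
    loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])      # item difficulties
x = np.array([1, 1, 1, 0, 1])                   # last answer correct ...
rt = np.array([12.0, 9.5, 7.1, 8.0, 0.8])       # ... but given in 0.8 s
print(mle_theta_rasch(x, b, rt))                # estimate ignores the rapid guess
```

Including the lucky 0.8-second response (threshold 0) would inflate the ability estimate, which is exactly the bias the correction targets.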
Peer reviewed
Direct link
Zhan, Peida; Jiao, Hong; Liao, Dandan; Li, Feiming – Journal of Educational and Behavioral Statistics, 2019
Providing diagnostic feedback about growth is crucial to formative decisions such as targeted remedial instructions or interventions. This article proposed a longitudinal higher-order diagnostic classification modeling approach for measuring growth. The new modeling approach is able to provide quantitative values of overall and individual growth…
Descriptors: Classification, Growth Models, Educational Diagnosis, Models
Peer reviewed
Direct link
Carmen Köhler; Lale Khorramdel; Artur Pokropek; Johannes Hartig – Journal of Educational Measurement, 2024
For assessment scales applied to different groups (e.g., students from different states; patients in different countries), multigroup differential item functioning (MG-DIF) needs to be evaluated in order to ensure that respondents with the same trait level but from different groups have equal response probabilities on a particular item. The…
Descriptors: Measures (Individuals), Test Bias, Models, Item Response Theory
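The MG-DIF requirement stated in this entry, equal response probabilities for equal trait levels across groups, can be illustrated by simulation: if an item carries a uniform DIF shift for one group, respondents matched on ability still show different proportions correct. The shift size, sample size, and ability-matching band below are all invented, and the matched-band check is a crude stand-in for a proper MG-DIF analysis.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000

# Two groups with the same ability distribution; the item is harder
# for group 1 by a uniform DIF shift of 0.5 logits (illustrative).
group = rng.integers(0, 2, size=n)
theta = rng.normal(size=n)
b = 0.0 + 0.5 * group                 # group-specific difficulty = DIF
p = 1 / (1 + np.exp(-(theta - b)))
x = (rng.random(n) < p).astype(int)

# DIF check: compare proportion correct between groups within a
# narrow ability band, so trait level is (approximately) matched.
band = np.abs(theta) < 0.25
p0 = x[band & (group == 0)].mean()
p1 = x[band & (group == 1)].mean()
print(round(p0 - p1, 2))   # positive gap signals DIF against group 1
```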
Peer reviewed
Direct link
Liu, Ren; Liu, Haiyan; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2022
Assessments with a large amount of small, similar, or often repetitive tasks are being used in educational, neurocognitive, and psychological contexts. For example, respondents are asked to recognize numbers or letters from a large pool of those and the number of correct answers is a count variable. In 1960, George Rasch developed the Rasch…
Descriptors: Classification, Models, Statistical Distributions, Scores
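The Rasch model for counts referenced in this entry (the Rasch Poisson counts model) assumes the count for a person-task pair is Poisson distributed with a rate driven by person ability and task difficulty. A minimal sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Rasch Poisson counts model: the count for person j on task i is
# Poisson with rate lambda_ij = exp(theta_j - delta_i), i.e., person
# ability times task easiness. All parameters here are invented.
theta = np.array([1.0, 0.0, -1.0])    # person parameters
delta = np.array([0.0, 0.5])          # task difficulties

lam = np.exp(theta[:, None] - delta[None, :])   # rate per unit exposure
counts = rng.poisson(lam * 50)                  # counts over 50 exposure units
print(counts.sum(axis=1))                       # higher ability -> higher counts
```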
Peer reviewed
Direct link
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a popular behavior of test-takers when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
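Distractor analysis of the kind described here can be previewed with simulated data: a functioning distractor should attract proportionally more low-ability than high-ability respondents. The mixture below (2PL key choice plus uniform guessing over the distractors) is illustrative only and is not the IRT distractor model proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3000

# Simulated MC item with key 'A' and distractors 'B', 'C': ability
# drives choosing the key; everyone else guesses among distractors.
theta = rng.normal(size=n)
p_key = 1 / (1 + np.exp(-1.5 * theta))
picks_key = rng.random(n) < p_key
choice = np.where(picks_key, "A", rng.choice(["B", "C"], size=n))

# Distractor analysis: option proportions in low vs. high ability thirds.
lo = theta < np.quantile(theta, 1 / 3)
hi = theta > np.quantile(theta, 2 / 3)
for opt in ["A", "B", "C"]:
    print(opt, round((choice[lo] == opt).mean(), 2),
               round((choice[hi] == opt).mean(), 2))
```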
Peer reviewed
PDF on ERIC Download full text
Oliva, Jose M.; Blanco, Ángel – European Journal of Science and Mathematics Education, 2023
A questionnaire was recently developed for use with Spanish speakers, and evidence of its internal construct validity has been provided by means of structural equation modelling. In this paper, two research questions were considered: (i) What new evidence does application of the Rasch model provide regarding the validity of this…
Descriptors: Spanish Speaking, High School Students, College Students, Item Response Theory
Peer reviewed
Direct link
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of educational large-scale assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which bears the risk of misclassifying fast engaged or slow disengaged responses.…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
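A response-time-based identification approach of the kind this entry critiques typically flags responses faster than some item-level threshold; combining the time criterion with an accuracy check is one way to reduce misclassification of fast engaged responders. The threshold rule and data below are invented for illustration and are not the authors' method.

```python
import numpy as np

def flag_disengaged(rt, correct, n_options=4, rt_frac=0.10):
    """Flag likely rapid guesses: responses far faster than the item's
    typical time, but only if those fast responses are no more accurate
    than chance (1/n_options). The 10%-of-median threshold is illustrative."""
    threshold = rt_frac * np.median(rt)
    fast = rt < threshold
    chance = 1.0 / n_options
    if fast.any() and correct[fast].mean() <= chance + 0.05:
        return fast
    # Fast but accurate responding is treated as engaged, not guessing.
    return np.zeros_like(fast)

rt = np.array([35.0, 41.2, 28.7, 1.1, 0.9, 30.3, 2.0])   # seconds
correct = np.array([1, 1, 0, 0, 0, 1, 0])
print(flag_disengaged(rt, correct))
```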
Peer reviewed
PDF on ERIC Download full text
Walz, Kristina; Braun, Edith – Higher Education Forum, 2022
This paper examines a communication model based on six theoretical facets. Each facet was operationalised according to two aspects of Habermas' theory of communicative action: strategic and understanding-oriented action. The aim of the empirical analyses was to ascertain whether the postulated model could be used to measure different levels of…
Descriptors: Foreign Countries, College Students, Competence, Communication Skills
Peer reviewed
Direct link
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
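A choice model based on signal detection theory, as mentioned in this entry, treats each MC option as generating a familiarity signal and the respondent as choosing the strongest one, with the keyed option's signal shifted by a discriminability parameter d. The Monte Carlo sketch below is a generic equal-variance SDT choice model, not necessarily the specification proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 100_000, 4   # simulated respondents; m-alternative MC item

# SDT choice model sketch: the keyed option's familiarity signal is
# shifted by d; the respondent picks the option with the largest signal.
d = 1.0
noise = rng.normal(size=(n, m - 1))        # the m-1 distractor signals
signal = rng.normal(loc=d, size=n)         # the keyed option's signal
p_correct = (signal[:, None] > noise).all(axis=1).mean()
print(round(p_correct, 2))
```

At d = 0 this reduces to pure guessing (probability 1/m); increasing d moves the predicted proportion correct toward 1.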