Showing 1 to 15 of 98 results
Peer reviewed
PDF on ERIC
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to reveal the accuracy of estimation of multiple-choice test items parameters following the models of the item-response theory in measurement. Materials/methods: The researchers depended on the measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
Peer reviewed
PDF on ERIC
Neda Kianinezhad; Mohsen Kianinezhad – Language Education & Assessment, 2025
This study presents a comparative analysis of classical reliability measures, including Cronbach's alpha, test-retest, and parallel forms reliability, alongside modern psychometric methods such as the Rasch model and Mokken scaling, to evaluate the reliability of C-tests in language proficiency assessment. Utilizing data from 150 participants…
Descriptors: Psychometrics, Test Reliability, Language Proficiency, Language Tests
Peer reviewed
Direct link
Dahl, Laura S.; Staples, B. Ashley; Mayhew, Matthew J.; Rockenbach, Alyssa N. – Innovative Higher Education, 2023
Surveys with rating scales are often used in higher education research to measure student learning and development, yet testing and reporting on the longitudinal psychometric properties of these instruments is rare. Rasch techniques allow scholars to map item difficulty and individual aptitude on the same linear, continuous scale to compare…
Descriptors: Surveys, Rating Scales, Higher Education, Educational Research
Peer reviewed
Direct link
Rodrigo Moreta-Herrera; Xavier Oriol-Granado; Mònica González; Jose A. Rodas – Infant and Child Development, 2025
This study evaluates the Children's Worlds Psychological Well-Being Scale (CW-PSWBS) within a diverse international cohort of children aged 10 and 12, utilising Classical Test Theory (CTT) and Item Response Theory (IRT) methodologies. Through a detailed psychometric analysis, this research assesses the CW-PSWBS's structural integrity, focusing on…
Descriptors: Well Being, Rating Scales, Children, Item Response Theory
Peer reviewed
PDF on ERIC
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test according to the two methods of applying the test (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
PDF on ERIC
Zenger, Tim; Bitzenbauer, Philipp – Science Education International, 2022
This article reports on the development and piloting of a German version of a concept test to assess students' conceptual knowledge of density. The concept test was administered in paper-pencil format to 222 German secondary school students as a post-test after instruction in all relevant concepts of density. We provide a psychometric…
Descriptors: Foreign Countries, Secondary School Students, Concept Formation, Psychometrics
Yoo Jeong Jang – ProQuest LLC, 2022
Despite the increasing demand for diagnostic information, observed subscores have been often reported to lack adequate psychometric qualities such as reliability, distinctiveness, and validity. Therefore, several statistical techniques based on CTT and IRT frameworks have been proposed to improve the quality of subscores. More recently, DCM has…
Descriptors: Classification, Accuracy, Item Response Theory, Correlation
Peer reviewed
Direct link
Rodriguez, Rebekah M.; Silvia, Paul J.; Kaufman, James C.; Reiter-Palmon, Roni; Puryear, Jeb S. – Creativity Research Journal, 2023
The original 90-item Creative Behavior Inventory (CBI) was a landmark self-report scale in creativity research, and the 28-item brief form developed nearly 20 years ago continues to be a popular measure of everyday creativity. Relatively little is known, however, about the psychometric properties of this widely used scale. In the current research,…
Descriptors: Creativity Tests, Creativity, Creative Thinking, Psychometrics
Peer reviewed
Direct link
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Journal of Educational and Behavioral Statistics, 2025
Analyzing heterogeneous treatment effects (HTEs) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and preintervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
Peer reviewed
Direct link
Stephanie M. Werner; Ying Chen; Mike Stieff – Journal of Chemical Education, 2021
The Chemistry Self-Concept Inventory (CSCI) is a widely used instrument within chemistry education research. Yet, agreement on its overall reliability and validity is lacking, and psychometric analyses of the instrument remain outstanding. This study examined the psychometric properties of the subscale and item function of the CSCI on 1140 high…
Descriptors: Self Concept Measures, Chemistry, Psychometrics, Item Response Theory
Peer reviewed
Direct link
Musa Adekunle Ayanwale – Discover Education, 2023
Examination scores obtained by students from the West African Examinations Council (WAEC), and National Business and Technical Examinations Board (NABTEB) may not be directly comparable due to differences in examination administration, item characteristics of the subject in question, and student abilities. For more accurate comparisons, scores…
Descriptors: Equated Scores, Mathematics Tests, Test Items, Test Format
Peer reviewed
PDF on ERIC
Hussein, Rasha Abed; Sabit, Shaker Holh; Alwan, Merriam Ghadhanfar; Wafqan, Hussam Mohammed; Baqer, Abeer Ameen; Ali, Muneam Hussein; Hachim, Safa K.; Sahi, Zahraa Tariq; AlSalami, Huda Takleef; Sulaiman, Bahaa Aldin Fawzi – International Journal of Language Testing, 2022
Dictation is a traditional technique for both teaching and testing overall language ability and listening comprehension. In a dictation, a passage is read aloud by the teacher and examinees write down what they hear. Due to the peculiar form of dictations, psychometric analysis of dictations is challenging. In a dictation, there is no clear…
Descriptors: Psychometrics, Verbal Communication, Teaching Methods, Language Skills
Peer reviewed
Direct link
Roelofs, Erik C.; Emons, Wilco H. M.; Verschoor, Angela J. – International Journal of Testing, 2021
This study reports on an Evidence Centered Design (ECD) project in the Netherlands, involving the theory exam for prospective car drivers. In particular, we illustrate how cognitive load theory, task-analysis, response process models, and explanatory item-response theory can be used to systematically develop and refine task models. Based on a…
Descriptors: Foreign Countries, Psychometrics, Test Items, Evidence Based Practice
Peer reviewed
PDF on ERIC
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often increased the overall test results. What prompts students to modify their answers? This study aims to examine the modification of scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed
Direct link
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity