Mahdi Ghorbankhani; Keyvan Salehi – SAGE Open, 2025
Academic procrastination, the tendency to delay academic tasks without reasonable justification, has significant implications for students' academic performance and overall well-being. To measure this construct, numerous scales have been developed, among which the Academic Procrastination Scale (APS) has shown promise in assessing academic…
Descriptors: Psychometrics, Measures (Individuals), Time Management, Foreign Countries
Thompson, Kathryn N. – ProQuest LLC, 2023
It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct irrelevant variance, or the "degree to which test scores are…
Descriptors: Test Wiseness, Test Items, Item Response Theory, Scores
Fergadiotis, Gerasimos; Casilio, Marianne; Dickey, Michael Walsh; Steel, Stacey; Nicholson, Hannele; Fleegle, Mikala; Swiderski, Alexander; Hula, William D. – Journal of Speech, Language, and Hearing Research, 2023
Purpose: Item response theory (IRT) is a modern psychometric framework with several advantageous properties as compared with classical test theory. IRT has been successfully used to model performance on anomia tests in individuals with aphasia; however, all efforts to date have focused on noun production accuracy. The purpose of this study is to…
Descriptors: Item Response Theory, Psychometrics, Verbs, Naming
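Several of the entries above rest on item response theory; as a point of reference, the two-parameter logistic (2PL) model at its core can be sketched in a few lines. The parameter values below are illustrative, not drawn from any of the studies listed:

```python
import math

def p_correct_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model.

    theta: examinee ability
    a: item discrimination
    b: item difficulty
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty (theta == b), the probability is
# exactly 0.5, regardless of the discrimination parameter.
print(p_correct_2pl(0.0, 1.2, 0.0))  # -> 0.5
```

Higher discrimination `a` steepens the curve around `b`, which is why IRT can characterize items individually rather than only at the test level.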
Farshad Effatpanah; Purya Baghaei; Mona Tabatabaee-Yazdi; Esmat Babaii – Language Testing, 2025
This study aimed to propose a new method for scoring C-Tests as measures of general language proficiency. In this approach, the unit of analysis is sentences rather than gaps or passages. That is, the gaps correctly reformulated in each sentence were aggregated as sentence score, and then each sentence was entered into the analysis as a polytomous…
Descriptors: Item Response Theory, Language Tests, Test Items, Test Construction
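The sentence-level scoring this abstract describes amounts to summing the correctly restored gaps within each sentence, yielding one polytomous score per sentence. A minimal sketch, assuming per-gap 0/1 scores already grouped by sentence (the data here are invented):

```python
def sentence_scores(gap_scores_by_sentence):
    """Aggregate dichotomous gap scores (0/1) into one polytomous
    score per sentence, as in sentence-based C-Test scoring.

    gap_scores_by_sentence: list of lists, one inner list per sentence.
    Returns a list of integer sentence scores.
    """
    return [sum(gaps) for gaps in gap_scores_by_sentence]

# Three sentences with 3, 2, and 4 gaps respectively:
print(sentence_scores([[1, 0, 1], [1, 1], [0, 0, 1, 1]]))  # -> [2, 2, 2]
```

Each resulting sentence score can then enter a polytomous IRT model in place of the usual gap- or passage-level scores.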
Tyrone B. Pretorius; P. Paul Heppner; Anita Padmanabhanunni; Serena Ann Isaacs – SAGE Open, 2023
In previous studies, problem solving appraisal has been identified as playing a key role in promoting positive psychological well-being. The Problem Solving Inventory is the most widely used measure of problem solving appraisal and consists of 32 items. The length of the instrument, however, may limit its applicability to large-scale surveys…
Descriptors: Problem Solving, Measures (Individuals), Test Construction, Item Response Theory
Liu, Xiaowen; Rogers, H. Jane – Educational and Psychological Measurement, 2022
Test fairness is critical to the validity of group comparisons involving gender, ethnicities, culture, or treatment conditions. Detection of differential item functioning (DIF) is one component of efforts to ensure test fairness. The current study compared four treatments for items that have been identified as showing DIF: deleting, ignoring,…
Descriptors: Item Analysis, Comparative Analysis, Culture Fair Tests, Test Validity
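The study above compares treatments for items already flagged for DIF; the flagging itself is commonly done with the Mantel-Haenszel common odds ratio. A minimal sketch on invented data (group labels, strata, and counts are all illustrative):

```python
from collections import defaultdict

def mh_odds_ratio(responses):
    """Mantel-Haenszel common odds ratio, a classic DIF screen.

    responses: iterable of (group, stratum, correct) with group in
    {'ref', 'focal'}, stratum a matching level (e.g. rest score),
    and correct in {0, 1}. A value near 1.0 suggests little DIF.
    """
    # Per-stratum 2x2 counts: [ref correct, ref wrong, focal correct, focal wrong]
    counts = defaultdict(lambda: [0, 0, 0, 0])
    for group, stratum, correct in responses:
        idx = (0 if correct else 1) if group == 'ref' else (2 if correct else 3)
        counts[stratum][idx] += 1
    num = den = 0.0
    for a, b, c, d in counts.values():
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# One stratum: 8/10 reference examinees correct vs. 6/10 focal.
data = (
    [('ref', 1, 1)] * 8 + [('ref', 1, 0)] * 2 +
    [('focal', 1, 1)] * 6 + [('focal', 1, 0)] * 4
)
print(mh_odds_ratio(data))
```

An odds ratio well above (or below) 1.0 after matching on ability is the usual trigger for the delete/ignore/adjust decisions the study evaluates.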
Guo, Hongwen; Ling, Guangming; Frankel, Lois – ETS Research Report Series, 2020
With advances in technology, researchers and test developers are developing new item types to measure complex skills like problem solving and critical thinking. Analyzing such items is often challenging because of their complicated response patterns, and thus it is important to develop psychometric methods for practitioners and researchers to…
Descriptors: Test Construction, Test Items, Item Analysis, Psychometrics
Nurussaniah Nurussaniah; Punaji Setyosari; Dedi Kuswandi; Saida Ulfa – Journal of Baltic Science Education, 2025
The accurate assessment of analytical thinking in physics, particularly in magnetism, poses substantial challenges due to the limitations of conventional tools in measuring higher-order cognitive skills. This study aimed to validate an analytical skills test in physics, based on Bloom's Revised Taxonomy, with an emphasis on the dimensions of…
Descriptors: Physics, Science Tests, Science Instruction, Thinking Skills
Hartono, Wahyu; Hadi, Samsul; Rosnawati, Raden; Retnawati, Heri – Pegem Journal of Education and Instruction, 2023
Researchers design diagnostic assessments to measure students' knowledge structures and processing skills to provide information about their cognitive attributes. The purpose of this study is to determine the instrument's validity and score reliability, as well as to investigate the use of classical test theory to identify item characteristics. The…
Descriptors: Diagnostic Tests, Test Validity, Item Response Theory, Content Validity
Sheng, Yanyan – Measurement: Interdisciplinary Research and Perspectives, 2019
The classical approach to test theory has been the foundation for educational and psychological measurement for over 90 years. This approach is concerned with measurement error and hence test reliability, which in part relies on individual test items. The CTT package, developed in light of this, provides functions for test- and item-level analyses of…
Descriptors: Item Response Theory, Test Reliability, Item Analysis, Error of Measurement
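The CTT package described above is an R package; the kinds of test- and item-level statistics such software reports, classical item difficulty (proportion correct) and Cronbach's alpha, can be sketched in Python on invented scores:

```python
def item_difficulty(item):
    """Classical item difficulty: proportion of examinees correct."""
    return sum(item) / len(item)

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score lists (one per item)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three dichotomous items scored for five examinees (invented data):
items = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
]
print([item_difficulty(it) for it in items])  # -> [0.6, 0.4, 0.8]
print(round(cronbach_alpha(items), 3))  # -> 0.794
```

In the classical framework these statistics, rather than a latent-trait model, carry the burden of describing item quality and test reliability.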
Sahin, Melek Gülsah; Yildirim, Yildiz; Boztunç Öztürk, Nagihan – Participatory Educational Research, 2023
A review of the literature shows that the development process of an achievement test is mainly investigated in dissertations. Moreover, a form that sheds light on developing an achievement test is expected to guide those who will administer the test. Along these lines, the current study aims to create an "Achievement Test Development Process…
Descriptors: Achievement Tests, Test Construction, Records (Forms), Mathematics Achievement
Pennell, Adam; Patey, Matthew; Fisher, Jenna; Brian, Ali – Measurement in Physical Education and Exercise Science, 2022
Falls are a significant medical and economical concern worldwide. Younger individuals with visual impairment (VI) may be more susceptible to falling and fall-related injuries when compared to peers without a VI. Self-perceived balance confidence is a psychological construct that may predict and/or mediate fall- or other health-related outcomes in…
Descriptors: Psychomotor Skills, Self Efficacy, Accidents, Injuries
Rafi, Ibnu; Retnawati, Heri; Apino, Ezi; Hadiana, Deni; Lydiati, Ida; Rosyada, Munaya Nikma – Pedagogical Research, 2023
This study describes the characteristics of the test and its items used in the national-standardized school examination by applying classical test theory and focusing on item difficulty, item discrimination, test reliability, and distractor analysis. We analyzed response data of 191 12th graders from one of the public senior high schools in…
Descriptors: Foreign Countries, National Competency Tests, Standardized Tests, Mathematics Tests
Ji-young Shin – ProQuest LLC, 2021
The present dissertation investigated the impact of scales/scoring methods and prompt linguistic features on the measurement quality of L2 English elicited imitation (EI). Scales/scoring methods are an important feature for the validity and reliability of L2 EI tests, but less is known about them (Yan et al., 2016). Prompt linguistic features are also known…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Semantics
Omarov, Nazarbek Bakytbekovich; Mohammed, Aisha; Alghurabi, Ammar Muhi Khleel; Alallo, Hajir Mahmood Ibrahim; Ali, Yusra Mohammed; Hassan, Aalaa Yaseen; Demeuova, Lyazat; Viktorovna, Shvedova Irina; Nazym, Bekenova; Al Khateeb, Nashaat Sultan Afif – International Journal of Language Testing, 2023
The Multiple-choice (MC) item format is commonly used in educational assessments due to its economy and effectiveness across a variety of content domains. However, numerous studies have examined the quality of MC items in high-stakes and higher-education assessments and found many flawed items, especially in terms of distractors. These faulty…
Descriptors: Test Items, Multiple Choice Tests, Item Response Theory, English (Second Language)
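The distractor analysis these studies apply boils down to tabulating how often each option was chosen and flagging distractors that attract no examinees, one of the common item flaws mentioned above. A minimal sketch on invented responses, assuming a four-option A-D item:

```python
from collections import Counter

def distractor_analysis(choices, key, options='ABCD'):
    """Tabulate option choices for one MC item and flag non-functioning
    distractors (options other than the key that nobody selected)."""
    counts = Counter(choices)
    unused = [opt for opt in options if opt != key and counts[opt] == 0]
    return counts, unused

# Eight invented responses to an item whose key is 'A':
choices = ['A', 'A', 'B', 'A', 'C', 'A', 'A', 'B']
counts, unused = distractor_analysis(choices, 'A')
print(unused)  # -> ['D']
```

A distractor no one selects adds no information and effectively reduces the number of functioning options, which is why such items are routinely flagged for revision.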