Showing 451 to 465 of 9,533 results
Peer reviewed
Direct link
Sen, Sedat – Creativity Research Journal, 2022
The purpose of this study was to estimate the overall reliability values for the scores produced by the Runco Ideational Behavior Scale (RIBS) and to explore the variability of RIBS score reliability across studies. To achieve this, a reliability generalization meta-analysis was carried out using the 86 Cronbach's alpha estimates obtained from 77 studies…
Descriptors: Generalization, Creativity, Meta Analysis, Higher Education
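The coefficient whose estimates this meta-analysis pools, Cronbach's alpha, can be computed directly from a respondents-by-items score matrix. A minimal sketch with hypothetical simulated data (not the study's RIBS data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical data: 200 respondents, 5 items sharing a common true score.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
scores = true_score + rng.normal(scale=0.8, size=(200, 5))
alpha = cronbach_alpha(scores)
```

A reliability generalization study then treats many such alphas, one per primary study, as the effect sizes to be meta-analyzed.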
Peer reviewed
PDF on ERIC Download full text
Kilic, Abdullah Faruk; Uysal, Ibrahim – International Journal of Assessment Tools in Education, 2022
Most researchers investigate the corrected item-total correlation of items when analyzing item discrimination in multi-dimensional structures under the Classical Test Theory, which might lead to underestimating item discrimination, thereby removing items from the test. Researchers might investigate the corrected item-total correlation with the…
Descriptors: Item Analysis, Correlation, Item Response Theory, Test Items
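The statistic at issue, the corrected item-total correlation, correlates each item with the total score computed without that item. A minimal sketch on hypothetical dichotomous data (not the authors' analysis):

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score excluding that item."""
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Hypothetical data: 300 examinees, 6 dichotomous items driven by one latent trait.
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
data = (latent + rng.normal(size=(300, 6)) > 0).astype(float)
r_it = corrected_item_total(data)
```

In a multi-dimensional test, computing this against the all-items total mixes dimensions, which is the underestimation concern the abstract raises.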
Yiqin Pan – ProQuest LLC, 2022
Item preknowledge refers to the phenomenon in which some examinees have access to live items before taking a test. It is one of the most common and significant concerns within the testing industry. Thus, various statistical methods have been proposed to detect item preknowledge in computerized linear or adaptive testing. However, the success of…
Descriptors: Artificial Intelligence, Prior Learning, Test Items, Algorithms
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Use of testlets is very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing where the test forms are created in real time tailoring to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
Peer reviewed
Direct link
Harrison, Scott; Kroehne, Ulf; Goldhammer, Frank; Lüdtke, Oliver; Robitzsch, Alexander – Large-scale Assessments in Education, 2023
Background: Mode effects, the variations in item and scale properties attributed to the mode of test administration (paper vs. computer), have stimulated research around test equivalence and trend estimation in PISA. The PISA assessment framework provides the backbone to the interpretation of the results of the PISA test scores. However, an…
Descriptors: Scoring, Test Items, Difficulty Level, Foreign Countries
Peer reviewed
Direct link
Huang, Qi; Bolt, Daniel M. – Educational and Psychological Measurement, 2023
Previous studies have demonstrated evidence of latent skill continuity even in tests intentionally designed for measurement of binary skills. In addition, the assumption of binary skills when continuity is present has been shown to potentially create a lack of invariance in item and latent ability parameters that may undermine applications. In…
Descriptors: Item Response Theory, Test Items, Skill Development, Robustness (Statistics)
Peer reviewed
Direct link
Luan, Lin; Liang, Jyh-Chong; Chai, Ching Sing; Lin, Tzu-Bin; Dong, Yan – Interactive Learning Environments, 2023
The emergence of new media technologies has empowered individuals to not merely consume but also create, share and critique media contents. Such activities are dependent on new media literacy (NML) necessary for living and working in the participatory culture of the twenty-first century. Although a burgeoning body of research has focused on the…
Descriptors: Foreign Countries, Media Literacy, Test Construction, English (Second Language)
Peer reviewed
Direct link
Fergadiotis, Gerasimos; Casilio, Marianne; Dickey, Michael Walsh; Steel, Stacey; Nicholson, Hannele; Fleegle, Mikala; Swiderski, Alexander; Hula, William D. – Journal of Speech, Language, and Hearing Research, 2023
Purpose: Item response theory (IRT) is a modern psychometric framework with several advantageous properties as compared with classical test theory. IRT has been successfully used to model performance on anomia tests in individuals with aphasia; however, all efforts to date have focused on noun production accuracy. The purpose of this study is to…
Descriptors: Item Response Theory, Psychometrics, Verbs, Naming
Peer reviewed
PDF on ERIC Download full text
Acikgul, Kubra; Sad, Suleyman Nihat; Altay, Bilal – International Journal of Assessment Tools in Education, 2023
This study aimed to develop a useful test to measure university students' spatial abilities validly and reliably. Following a sequential explanatory mixed methods research design, first, qualitative methods were used to develop the trial items for the test; next, the psychometric properties of the test were analyzed through quantitative methods…
Descriptors: Spatial Ability, Scores, Multiple Choice Tests, Test Validity
Peer reviewed
PDF on ERIC Download full text
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
One way to ensure the validity of a test is to check that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when individuals with equal ability levels from different groups perform differently on the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
Peer reviewed
Direct link
Alahmadi, Sarah; Jones, Andrew T.; Barry, Carol L.; Ibáñez, Beatriz – Applied Measurement in Education, 2023
Rasch common-item equating is often used in high-stakes testing to maintain equivalent passing standards across test administrations. If unaddressed, item parameter drift poses a major threat to the accuracy of Rasch common-item equating. We compared the performance of well-established and newly developed drift detection methods in small and large…
Descriptors: Equated Scores, Item Response Theory, Sample Size, Test Items
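One widely used baseline for the drift detection this study evaluates is a displacement check under mean-mean linking: after removing the overall shift between administrations, common items whose difficulty moves beyond a cutoff (often around 0.5 logits) are flagged. A minimal sketch with hypothetical item difficulties; the cutoff and data are illustrative, not the authors' method:

```python
import numpy as np

def flag_drift(b_old, b_new, threshold=0.5):
    """Flag common items whose Rasch difficulty shifts more than `threshold`
    logits after removing the overall mean shift (mean-mean linking)."""
    diff = np.asarray(b_new) - np.asarray(b_old)
    displacement = diff - diff.mean()   # remove the linking constant
    return np.abs(displacement) > threshold

# Hypothetical difficulties across two administrations; the last item drifted.
b_old = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
b_new = np.array([-0.9, -0.4, 0.1, 0.6, 2.4])
flags = flag_drift(b_old, b_new)
```

Flagged items are typically removed from the common-item set before re-estimating the equating constant.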
Peer reviewed
Direct link
Kreitchmann, Rodrigo S.; Sorrel, Miguel A.; Abad, Francisco J. – Educational and Psychological Measurement, 2023
Multidimensional forced-choice (FC) questionnaires have been consistently found to reduce the effects of socially desirable responding and faking in noncognitive assessments. Although FC has been considered problematic for providing ipsative scores under the classical test theory, item response theory (IRT) models enable the estimation of…
Descriptors: Measurement Techniques, Questionnaires, Social Desirability, Adaptive Testing
Peer reviewed
Direct link
Saputra, H.; Suhandi, A.; Setiawan, A.; Permanasari, A.; Firmansyah, J. – Physics Education, 2023
This study proposes an online practicum model supported by Wireless Sensor Network (WSN) to implement a physics practicum after the COVID-19 Pandemic. This system is guided by exploratory inquiry questions to help structure students' mindsets in answering investigative questions. Online practicum is also integrated with video conferencing, chat,…
Descriptors: Science Tests, Computer Assisted Testing, Physics, Practicums
Peer reviewed
PDF on ERIC Download full text
Wan, Siyu; Keller, Lisa A. – Practical Assessment, Research & Evaluation, 2023
Statistical process control (SPC) charts have been widely used in the field of educational measurement. The cumulative sum (CUSUM) is an established SPC method to detect aberrant responses for educational assessments. There are many studies that investigated the performance of CUSUM in different test settings. This paper describes the CUSUM…
Descriptors: Visual Aids, Educational Assessment, Evaluation Methods, Item Response Theory
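The CUSUM statistic the paper describes accumulates small, persistent departures of an examinee's responses from model expectation. A minimal sketch of the one-sided upper chart on hypothetical standardized residuals (illustrative only, not the paper's implementation):

```python
import numpy as np

def upper_cusum(z, k=0.5):
    """One-sided upper CUSUM path: C_t = max(0, C_{t-1} + z_t - k),
    where z_t are standardized residuals and k is the reference value."""
    c, path = 0.0, []
    for zt in z:
        c = max(0.0, c + zt - k)
        path.append(c)
    return np.array(path)

# In-control responses hover near zero; persistent misfit accumulates
# until the path crosses a decision threshold h.
in_control = upper_cusum(np.zeros(20))
shifted = upper_cusum(np.full(20, 1.5))   # persistent misfit of 1.5 SD
```

An alarm is raised the first time the path exceeds a threshold h chosen to control the false-alarm rate; plotting the path is what makes CUSUM useful as a visual aid.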