Publication Date
In 2025 | 1 |
Since 2024 | 2 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 3 |
Descriptor
Computer Software | 3 |
Item Response Theory | 3 |
Test Format | 3 |
Test Items | 3 |
Artificial Intelligence | 2 |
Foreign Countries | 2 |
Achievement Tests | 1 |
Beliefs | 1 |
College Students | 1 |
Comparative Analysis | 1 |
Computation | 1 |
Author
Ahmed Al-Badri | 1 |
Chen, Hui-Fang | 1 |
Jin, Kuan-Yu | 1 |
Kyung-Mi O. | 1 |
Mimi Ismail | 1 |
Said Al-Senaidi | 1 |
Wang, Wen-Chung | 1 |
Publication Type
Journal Articles | 3 |
Reports - Research | 3 |
Education Level
Elementary Education | 1 |
Grade 8 | 1 |
Higher Education | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Postsecondary Education | 1 |
Secondary Education | 1 |
Location
Hong Kong | 1 |
Oman | 1 |
United States | 1 |
Assessments and Surveys
Program for International… | 1 |
Trends in International… | 1 |
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating test items parallel to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
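As context for the kind of comparison described above (not part of the study itself), here is a minimal sketch of how statistical parallelism between two 20-item forms is commonly checked, assuming per-examinee total scores for each form are available as NumPy arrays; the data below are placeholders:

import numpy as np
from scipy import stats

# Placeholder data: 200 examinees, two 20-item forms scored 0-20, driven by a
# common latent ability so the forms are correlated (hypothetical values).
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 200)
form_a = np.clip(np.round(12 + 4 * theta + rng.normal(0, 1.5, 200)), 0, 20)
form_b = np.clip(np.round(12 + 4 * theta + rng.normal(0, 1.5, 200)), 0, 20)

# Parallel forms should show roughly equal means and variances and a high
# between-form correlation (alternate-forms reliability).
_, p_mean = stats.ttest_rel(form_a, form_b)   # paired test of mean difference
_, p_var = stats.levene(form_a, form_b)       # test of equal variances
r, _ = stats.pearsonr(form_a, form_b)         # alternate-forms correlation

print(f"mean-difference p = {p_mean:.3f}, equal-variance p = {p_var:.3f}, r = {r:.2f}")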
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' abilities, their standard errors, and the psychometric properties of the test across the two administration modes (electronic and paper-based). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
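For background on the quantities being compared (standard IRT results, not formulas taken from the study): under a two-parameter logistic model, each ability estimate carries a standard error derived from the test information function, so comparing electronic and paper-based administrations amounts to comparing the ability estimates and their information-based standard errors from each mode's calibration:

P_j(\theta) = \frac{1}{1 + \exp[-a_j(\theta - b_j)]}, \qquad
I(\theta) = \sum_{j} a_j^{2}\, P_j(\theta)\,[1 - P_j(\theta)], \qquad
SE(\hat{\theta}) = \frac{1}{\sqrt{I(\hat{\theta})}}

where a_j and b_j are the discrimination and difficulty of item j.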
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
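For context, a bi-factor IRT treatment of wording effects typically adds to the general trait an orthogonal "wording" factor loaded only by the negatively worded items; one generic logistic form (a sketch of the general approach, not necessarily the authors' exact parameterization) is:

P(X_{ij} = 1 \mid \theta_i, \gamma_i) = \frac{\exp\!\left(a_j \theta_i + c_j \delta_j \gamma_i - b_j\right)}{1 + \exp\!\left(a_j \theta_i + c_j \delta_j \gamma_i - b_j\right)}

where \theta_i is the general trait, \gamma_i the wording factor (orthogonal to \theta_i), \delta_j = 1 for negatively worded items and 0 otherwise, a_j and c_j are loadings, and b_j is the item intercept. Because reverse coding does not remove the \gamma_i component from negatively worded items, recoding alone cannot make them function like positively worded items, which is the point the abstract raises.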