Publication Date
In 2025: 5
Since 2024: 15
Since 2021 (last 5 years): 74
Since 2016 (last 10 years): 141
Since 2006 (last 20 years): 215
Descriptor
Comparative Analysis: 336
Test Format: 336
Computer Assisted Testing: 109
Foreign Countries: 107
Test Items: 106
Scores: 83
Multiple Choice Tests: 73
Language Tests: 63
English (Second Language): 58
Second Language Learning: 57
Statistical Analysis: 57
Author
Kim, Sooyeon: 5
DeBoer, George E.: 3
Hardcastle, Joseph: 3
Herrmann-Abell, Cari F.: 3
Liu, Jinghua: 3
Ali, Usama S.: 2
Allen, Nancy L.: 2
Anderson, Paul S.: 2
Ashraf, Hamid: 2
Ayan, Cansu: 2
Chapman, Mark: 2
Audience
Researchers: 3
Practitioners: 1
Teachers: 1
Location
Iran: 7
Turkey: 7
China: 6
Japan: 6
Germany: 5
Indonesia: 5
Sweden: 5
United Kingdom: 4
Australia: 3
California: 3
Canada: 3
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
Jessie Leigh Nielsen; Rikke Vang Christensen; Mads Poulsen – Journal of Research in Reading, 2024
Background: Studies of syntactic comprehension and reading comprehension use a wide range of syntactic comprehension tests that vary considerably in format. The goal of this study was to examine to what extent different formats of syntactic comprehension tests measure the same construct. Methods: Sixty-nine Grade 4 students completed multiple…
Descriptors: Syntax, Reading Comprehension, Comparative Analysis, Reading Tests
Hung Tan Ha; Duyen Thi Bich Nguyen; Tim Stoeckel – Language Assessment Quarterly, 2025
This article compares two methods for detecting local item dependence (LID): residual correlation examination and Rasch testlet modeling (RTM), in a commonly used 3:6 matching format and an extended matching test (EMT) format. The two formats are hypothesized to facilitate different levels of item dependency due to differences in the number of…
Descriptors: Comparative Analysis, Language Tests, Test Items, Item Analysis
Zhang, Xijuan; Zhou, Linnan; Savalei, Victoria – Educational and Psychological Measurement, 2023
Zhang and Savalei proposed an alternative scale format to the Likert format, called the Expanded format. In this format, response options are presented in complete sentences, which can reduce acquiescence bias and method effects. The goal of the current study was to compare the psychometric properties of the Rosenberg Self-Esteem Scale (RSES) in…
Descriptors: Psychometrics, Self Concept Measures, Self Esteem, Comparative Analysis
Tsai, Pei-Chun; Sachdeva, Chhavi; Gilbert, Sam J.; Scarampi, Chiara – Applied Cognitive Psychology, 2023
Saving information onto external resources can improve memory for subsequent information--a phenomenon known as the saving-enhanced memory effect. This article reports two preregistered online experiments investigating (A) whether this effect holds when to-be-remembered information is presented before the saved information and (B) whether people…
Descriptors: Memory, Decision Making, Word Lists, Learning Strategies
Santi Lestari – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain in a paper-based mode. Insufficient and unequal digital provision across schools is often identified as a major barrier to full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. The report describes the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Srikanth Allamsetty; M. V. S. S. Chandra; Neelima Madugula; Byamakesh Nayak – IEEE Transactions on Learning Technologies, 2024
The present study addresses problems associated with assessing students through online examinations at higher educational institutes (HEIs). With the current COVID-19 outbreak, the majority of educational institutes are conducting online examinations to assess their students, where there is always a chance that the students go for…
Descriptors: Computer Assisted Testing, Accountability, Higher Education, Comparative Analysis
Lishi Liang; W. L. Quint Oga-Baldwin; Kaori Nakao; Luke K. Fryer; Alex Shum – Technology in Language Teaching & Learning, 2024
Phonological processing of written characters has been recognized as a crucial element in acquiring literacy in any language, both native and foreign. This study aimed to assess Japanese primary school students' phoneme-grapheme recognition skills using both paper-based and touch-interface tests. Differences between the two test formats and the…
Descriptors: Phoneme Grapheme Correspondence, Language Tests, Gamification, Elementary School Students
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
Jones, Paul; Tong, Ye; Liu, Jinghua; Borglum, Joshua; Primoli, Vince – Journal of Educational Measurement, 2022
This article studied two methods to detect mode effects in two credentialing exams. In Study 1, we used a "modal scale comparison approach," where the same pool of items was calibrated separately, without transformation, within two TC cohorts (TC1 and TC2) and one OP cohort (OP1) matched on their pool-based scale score distributions. The…
Descriptors: Scores, Credentials, Licensing Examinations (Professions), Computer Assisted Testing
Harrison, Scott; Kroehne, Ulf; Goldhammer, Frank; Lüdtke, Oliver; Robitzsch, Alexander – Large-scale Assessments in Education, 2023
Background: Mode effects, the variations in item and scale properties attributed to the mode of test administration (paper vs. computer), have stimulated research around test equivalence and trend estimation in PISA. The PISA assessment framework provides the backbone to the interpretation of the results of the PISA test scores. However, an…
Descriptors: Scoring, Test Items, Difficulty Level, Foreign Countries
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Mi-Hyun Bang; Young-Min Lee – Education and Information Technologies, 2024
The Human Resources Development Service of Korea developed a digital exam for five representative engineering categories and conducted a pilot study comparing the findings with the paper-and-pencil exam results from the last three years. This study aimed to compare the test efficiency between digital and paper-and-pencil examinations. A digital…
Descriptors: Engineering Education, Computer Assisted Testing, Foreign Countries, Human Resources
Giofrè, D.; Allen, K.; Toffalini, E.; Caviola, S. – Educational Psychology Review, 2022
This meta-analysis reviews 79 studies (N = 46,605) that examined gender differences in intelligence in school-aged children. To do so, we limited the literature search to works that assessed the construct of intelligence through the Wechsler Intelligence Scales for Children (WISC) batteries, evaluating possible gender differences…
Descriptors: Gender Differences, Cognitive Processes, Children, Intelligence Tests