Mustafa Ilhan; Nese Güler; Gülsen Tasdelen Teker; Ömer Ergenekon – International Journal of Assessment Tools in Education, 2024
This study aimed to examine the effects of reverse items created with different strategies on psychometric properties and respondents' scale scores. To this end, three versions of a 10-item scale were developed: 10 positive items were integrated in the first form (Form-P) and five positive and five reverse items in the other two…
Descriptors: Test Items, Psychometrics, Scores, Measures (Individuals)
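The reverse-scoring step that studies like this rely on can be sketched in a few lines. Everything below (the response values and which items are reverse-keyed) is illustrative rather than taken from the study: on a 1-5 Likert scale, a reverse item is typically rescored as (low + high) - raw before summing.

```python
# Reverse-score Likert items before computing a total scale score.
# Assumes a 1-5 response scale; the reverse-keyed indices are hypothetical.

def score_scale(responses, reversed_items, low=1, high=5):
    """Sum item responses, flipping reverse-keyed items as (low + high) - raw."""
    total = 0
    for i, raw in enumerate(responses):
        total += (low + high - raw) if i in reversed_items else raw
    return total

# A 10-item form with five reverse-keyed items (indices chosen for illustration):
print(score_scale([5, 1, 4, 2, 5, 1, 4, 2, 5, 1], reversed_items={1, 3, 5, 7, 9}))
```

With the reverse-keyed items flipped, strongly "negative" raw responses contribute high scores, so consistent respondents get consistent totals regardless of item direction.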
Jeff Allen; Jay Thomas; Stacy Dreyer; Scott Johanningmeier; Dana Murano; Ty Cruce; Xin Li; Edgar Sanchez – ACT Education Corp., 2025
This report describes the process of developing and validating the enhanced ACT. The report describes the changes made to the test content and the processes by which these design decisions were implemented. The authors describe how they shared the overall scope of the enhancements, including the initial blueprints, with external expert panels,…
Descriptors: College Entrance Examinations, Testing, Change, Test Construction
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Mo, Ya; Carney, Michele; Cavey, Laurie; Totorica, Tatia – Applied Measurement in Education, 2021
There is a need for assessment items that assess complex constructs but can also be efficiently scored for evaluation of teacher education programs. In an effort to measure the construct of teacher attentiveness in an efficient and scalable manner, we are using exemplar responses elicited by constructed-response item prompts to develop…
Descriptors: Protocol Analysis, Test Items, Responses, Mathematics Teachers
Wolkowitz, Amanda A.; Foley, Brett; Zurn, Jared – Practical Assessment, Research & Evaluation, 2023
The purpose of this study is to introduce a method for converting scored 4-option multiple-choice (MC) items into scored 3-option MC items without re-pretesting the 3-option MC items. This study describes a six-step process for achieving this goal. Data from a professional credentialing exam were used in this study, and the method was applied to 24…
Descriptors: Multiple Choice Tests, Test Items, Accuracy, Test Format
Ozdemir, Burhanettin; Gelbal, Selahattin – Education and Information Technologies, 2022
Computerized adaptive tests (CAT) apply an adaptive process in which the items are tailored to individuals' ability scores. The multidimensional CAT (MCAT) designs differ in terms of the item selection, ability estimation, and termination methods used. This study aims to investigate the performance of the MCAT designs used to…
Descriptors: Scores, Computer Assisted Testing, Test Items, Language Proficiency
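The adaptive step common to all CAT designs (choosing the next item to maximize information at the current ability estimate) can be sketched for the unidimensional case. The Rasch model and the item pool below are illustrative assumptions, not the MCAT designs compared in the study:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def select_next_item(theta, pool, administered):
    """Pick the unadministered item whose difficulty maximizes information."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, pool[i]))

# Hypothetical item pool (Rasch difficulties); current ability estimate 0.2:
pool = [-1.5, -0.4, 0.3, 1.1, 2.0]
print(select_next_item(0.2, pool, administered={2}))  # prints 1: -0.4 is closest to 0.2
```

Under the Rasch model, information peaks where item difficulty matches the ability estimate, which is why the loop keeps handing examinees items near their provisional score.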
Wang, Lin – ETS Research Report Series, 2019
Rearranging response options in different versions of a test of multiple-choice items can be an effective strategy against cheating on the test. This study investigated whether rearranging response options would affect item performance and test score comparability. A study test was assembled as the base version from which 3 variant versions were…
Descriptors: Multiple Choice Tests, Test Items, Test Format, Scores
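The rearrangement strategy the abstract describes can be sketched as shuffling an item's response options per version and remapping the answer key so scoring stays correct. The item content, seed, and function name here are hypothetical:

```python
import random

def make_variant(options, correct_index, seed):
    """Return a shuffled copy of the options and the remapped correct index."""
    rng = random.Random(seed)          # per-version seed -> reproducible variant
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    return shuffled, order.index(correct_index)

# Hypothetical 4-option item; the correct option is "median" (index 1).
options = ["mean", "median", "mode", "range"]
variant, key = make_variant(options, correct_index=1, seed=42)
assert variant[key] == "median"        # the remapped key still marks the answer
assert sorted(variant) == sorted(options)
```

Seeding one variant per test form keeps each version reproducible while neighboring examinees see the options in different orders.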
Yulianto, Ahmad; Pudjitriherwanti, Anastasia; Kusumah, Chevy; Oktavia, Dies – International Journal of Language Testing, 2023
The increasing use of the computer-based mode in language testing raises concern over its similarities with and differences from the paper-based format. The present study aimed to delineate discrepancies between the TOEFL PBT and CBT. For that objective, a quantitative method was employed to probe score equivalence, the performance of male-female…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Scores
O'Grady, Stefan – Language Teaching Research, 2023
The current study explores the impact of varying multiple-choice question preview and presentation formats in a test of second language listening proficiency targeting different levels of text comprehension. In a between-participant design, participants completed a 30-item test of listening comprehension featuring implicit and explicit information…
Descriptors: Language Tests, Multiple Choice Tests, Scores, Second Language Learning
Lin, Ye – ProQuest LLC, 2018
With the widespread use of technology in the assessment field, many testing programs use both computer-based tests (CBTs) and paper-and-pencil tests (PPTs). Both the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) and the International Guidelines on Computer-Based and Internet Delivered Testing (International Test…
Descriptors: Computer Assisted Testing, Testing, Student Evaluation, Elementary School Students
Isler, Cemre; Aydin, Belgin – International Journal of Assessment Tools in Education, 2021
This study is about the development and validation process of the Computerized Oral Proficiency Test of English as a Foreign Language (COPTEFL). The test aims at assessing the speaking proficiency levels of students in Anadolu University School of Foreign Languages (AUSFL). For this purpose, three monologic tasks were developed based on the Global…
Descriptors: Test Construction, Construct Validity, Interrater Reliability, Scores
Keng, Leslie; Boyer, Michelle – National Center for the Improvement of Educational Assessment, 2020
ACT requested assistance from the National Center for the Improvement of Educational Assessment (Center for Assessment) to investigate declines in scores for states administering the ACT to their 11th grade students in 2018. This request emerged from conversations among state leaders, the Center for Assessment, and ACT in trying to understand the…
Descriptors: College Entrance Examinations, Scores, Test Score Decline, Educational Trends
Lina Anaya; Nagore Iriberri; Pedro Rey-Biel; Gema Zamarro – Annenberg Institute for School Reform at Brown University, 2021
Standardized assessments are widely used to determine access to educational resources with important consequences for later economic outcomes in life. However, many design features of the tests themselves may lead to psychological reactions influencing performance. In particular, the level of difficulty of the earlier questions in a test may…
Descriptors: Test Construction, Test Wiseness, Test Items, Difficulty Level
Al-Jarf, Reima – Online Submission, 2023
This article aims to give a comprehensive guide to planning and designing vocabulary tests, which includes identifying the skills to be covered by the test; outlining the course content covered; preparing a table of specifications that shows the skill, content topics, and number of questions allocated to each; and preparing the test instructions. The…
Descriptors: Vocabulary Development, Learning Processes, Test Construction, Course Content
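A table of specifications of the kind the article recommends can be represented as a simple mapping from (skill, content topic) to an item count. The skills, topics, and counts below are invented for illustration, not taken from the guide:

```python
# A minimal table of specifications: (skill, content topic) -> number of items.
# All skills, topics, and counts here are illustrative.
spec = {
    ("recognition", "academic word list"): 10,
    ("recall", "academic word list"): 5,
    ("recognition", "collocations"): 8,
    ("recall", "collocations"): 7,
}

total_items = sum(spec.values())

# Marginal totals by skill, useful for checking the blueprint's balance:
by_skill = {}
for (skill, _topic), n in spec.items():
    by_skill[skill] = by_skill.get(skill, 0) + n

print(total_items)   # prints 30
print(by_skill)      # prints {'recognition': 18, 'recall': 12}
```

Keeping the blueprint as data makes it easy to verify that the assembled test matches the planned skill and topic coverage before any items are written.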
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Measurement: Issues and Practice, 2019
The current study investigated how item formats and their inherent affordances influence test-takers' cognition under uncertainty. Adult participants solved content-equivalent math items in multiple-selection multiple-choice and four alternative grid formats. The results indicated that participants' affirmative response tendency (i.e., judge the…
Descriptors: Affordances, Test Items, Test Format, Test Wiseness