Publication Date
In 2025: 2
Since 2024: 4
Since 2021 (last 5 years): 11
Since 2016 (last 10 years): 16
Since 2006 (last 20 years): 43
Author
Wainer, Howard: 5
Stocking, Martha L.: 4
Algina, James: 2
Auchter, Joan E.: 2
Downing, Steven M.: 2
Haladyna, Thomas M.: 2
Hanson, Bradley A.: 2
Kim, Sooyeon: 2
Lawrence, Ida M.: 2
Martinez, Michael E.: 2
Pae, Tae-Il: 2
Education Level
Higher Education: 14
Elementary Secondary Education: 13
Secondary Education: 7
Grade 8: 5
Postsecondary Education: 5
Grade 4: 3
Elementary Education: 2
Grade 5: 2
Middle Schools: 2
Grade 1: 1
Grade 10: 1
Location
United States: 4
Australia: 2
Netherlands: 2
Oregon: 2
South Korea: 2
United Kingdom: 2
Belgium: 1
California: 1
China: 1
Germany: 1
Hungary: 1
Laws, Policies, & Programs
Individuals with Disabilities…: 1
No Child Left Behind Act 2001: 1
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
The current literature on test equating generally defines it as the process necessary to obtain score comparability between different test forms. This definition contrasts with Lord's foundational paper, which viewed equating as the process required to obtain comparability of measurement scale between forms. The distinction between the notions…
Descriptors: Equated Scores, Test Items, Scores, Probability
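As background for the score-comparability question raised above, the sketch below shows a textbook linear equating rule, which maps a score on form X to the scale of form Y by matching means and standard deviations. It is a generic illustration, not the framework developed in the article, and the form statistics used are hypothetical.

```python
# A textbook linear equating rule, shown only as background; not the
# framework developed in the article. Form statistics are hypothetical.
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a score x on form X to the scale of form Y by matching moments."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# A raw score of 30 on form X (mean 28, SD 6) expressed on form Y (mean 31, SD 5)
print(linear_equate(30, 28, 6, 31, 5))  # ~32.67
```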
Gustafsson, Martin; Barakat, Bilal Fouad – Comparative Education Review, 2023
International assessments inform education policy debates, yet little is known about their floor effects: To what extent do they fail to differentiate between the lowest performers, and what are the implications of this? TIMSS, SACMEQ, and LLECE data are analyzed to answer this question. In TIMSS, floor effects have been reduced through the…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
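One simple way to see what a floor effect looks like in practice is to check how much of the score distribution sits at or just above the minimum possible score. The sketch below is illustrative only and does not reproduce the TIMSS, SACMEQ, or LLECE analyses; the data and threshold are made up.

```python
# Illustrative floor-effect check: share of examinees at or near the minimum
# possible score. Hypothetical data; not the analyses reported in the article.
from collections import Counter

def floor_share(scores, minimum=0, tolerance=1):
    """Proportion of examinees scoring within `tolerance` points of the floor."""
    return sum(1 for s in scores if s <= minimum + tolerance) / len(scores)

raw_scores = [0, 0, 1, 1, 2, 3, 5, 7, 8, 10]  # made-up raw scores
print(f"Share at/near floor: {floor_share(raw_scores):.2f}")  # 0.40
print(Counter(raw_scores))  # full distribution, to see piling-up at the floor
```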
Emma Walland – Research Matters, 2024
GCSE examinations (taken by students aged 16 years in England) are not intended to be speeded (i.e. to be partly a test of how quickly students can answer questions). However, there has been little research exploring this. The aim of this research was to explore the speededness of past GCSE written examinations, using only the data from scored…
Descriptors: Educational Change, Test Items, Item Analysis, Scoring
Goran Trajkovski; Heather Hayes – Digital Education and Learning, 2025
This book explores the transformative role of artificial intelligence in educational assessment, catering to researchers, educators, administrators, policymakers, and technologists involved in shaping the future of education. It delves into the foundations of AI-assisted assessment, innovative question types and formats, data analysis techniques,…
Descriptors: Artificial Intelligence, Educational Assessment, Computer Uses in Education, Test Format
Selcuk Acar; Yuyang Shen – Journal of Creative Behavior, 2025
Creativity tests, like creativity itself, vary widely in their structure and use. These differences include instructions, test duration, environments, prompt and response modalities, and the structure of test items. A key factor is task structure, referring to the specificity of the number of responses requested for a given prompt. Classic…
Descriptors: Creativity, Creative Thinking, Creativity Tests, Task Analysis
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
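A generic way to quantify a mode effect in a randomized comparability design is a standardized mean difference between the two mode groups. The sketch below is a minimal illustration with made-up scores; it is not the set of analyses reported in this study.

```python
# Minimal mode-comparability check: Cohen's d between two randomly assigned
# testing modes. Scores are hypothetical; not the study's actual analyses.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

paper_scores = [20, 22, 19, 24, 21, 23]   # hypothetical paper-mode scores
online_scores = [21, 23, 20, 22, 22, 24]  # hypothetical online-mode scores
print(f"Mode effect (Cohen's d): {cohens_d(paper_scores, online_scores):+.2f}")
```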
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
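For readers unfamiliar with the two model families named in the abstract, the sketch below shows generic textbook forms: a two-parameter logistic IRT item response function and an equal-variance Gaussian SDT probability of a correct choice among m alternatives. These illustrate the model classes only, not the specific models proposed in the article, and all parameter values are made up.

```python
# Generic textbook forms of the two model families named in the abstract;
# not the specific SDT/IRT models proposed in the article.
import math

def irt_2pl(theta, a, b):
    """P(correct | ability theta) under a two-parameter logistic IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sdt_mafc_correct(d_prime, m, lo=-8.0, hi=8.0, steps=4000):
    """P(correct choice) among m alternatives under equal-variance Gaussian SDT,
    computed by numerically integrating phi(x - d') * Phi(x)**(m - 1)."""
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    dx = (hi - lo) / steps
    return sum(phi(lo + (i + 0.5) * dx - d_prime) * Phi(lo + (i + 0.5) * dx) ** (m - 1)
               for i in range(steps)) * dx

print(irt_2pl(theta=0.5, a=1.2, b=0.0))    # ~0.65 with made-up parameters
print(sdt_mafc_correct(d_prime=0.0, m=4))  # ~0.25, i.e., chance with 4 options
```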
László Kojanitz – Hungarian Educational Research Journal, 2023
In 2005 the Hungarian school-leaving examination system underwent a significant transformation. In the case of history, the aim was to give a greater role to the development of students' knowledge acquisition and source analysis skills by focusing more on students' work with historical sources in class. However, it was clear that the achievement of…
Descriptors: Foreign Countries, Exit Examinations, Minimum Competency Testing, Test Content
Wolkowitz, Amanda A.; Foley, Brett; Zurn, Jared – Practical Assessment, Research & Evaluation, 2023
The purpose of this study is to introduce a method for converting scored 4-option multiple-choice (MC) items into scored 3-option MC items without re-pretesting the 3-option MC items. This study describes a six-step process for achieving this goal. Data from a professional credentialing exam were used in this study, and the method was applied to 24…
Descriptors: Multiple Choice Tests, Test Items, Accuracy, Test Format
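The abstract does not spell out the six-step conversion method, so the sketch below shows only the standard chance-level arithmetic that usually motivates comparing 3- and 4-option items: the expected number-correct from blind guessing rises when an option is dropped. The exam length used is hypothetical.

```python
# Standard chance-level arithmetic for k-option multiple-choice items;
# this is NOT the authors' six-step conversion method (not given in the abstract).
def expected_chance_score(n_items, n_options):
    """Expected number-correct from blind guessing on n_items k-option items."""
    return n_items / n_options

n_items = 60  # hypothetical exam length
for k in (4, 3):
    print(f"{k}-option items: chance score on {n_items} items = "
          f"{expected_chance_score(n_items, k):.1f}")
```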
Rivas, Axel; Scasso, Martín Guillermo – Journal of Education Policy, 2021
Since 2000, the PISA test implemented by the OECD has become the prime benchmark for international comparisons in education. The 2015 PISA edition introduced methodological changes that altered the nature of its results: PISA no longer treated non-reached items in the final part of the test as valid, assuming that those unanswered questions were more a…
Descriptors: Test Validity, Computer Assisted Testing, Foreign Countries, Achievement Tests
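To make the methodological point concrete, the sketch below contrasts two ways of handling non-reached items in a simple percent-correct score: counting them as wrong versus excluding them as not administered. The response string is hypothetical, and PISA's actual IRT-based scaling is considerably more complex.

```python
# Illustrative contrast between scoring rules for non-reached items.
# Hypothetical responses; PISA's actual IRT-based scaling is more complex.
def percent_correct(responses, not_reached_as_wrong):
    """responses: 1 = correct, 0 = incorrect, None = not reached."""
    if not_reached_as_wrong:
        scored = [r if r is not None else 0 for r in responses]
    else:
        scored = [r for r in responses if r is not None]
    return sum(scored) / len(scored)

resp = [1, 1, 0, 1, 0, 1, None, None, None, None]  # examinee ran out of time
print(percent_correct(resp, not_reached_as_wrong=True))   # 0.40
print(percent_correct(resp, not_reached_as_wrong=False))  # ~0.67
```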
Loudon, Catherine; Macias-Muñoz, Aide – Advances in Physiology Education, 2018
Different versions of multiple-choice exams were administered to an undergraduate class in human physiology as part of normal testing in the classroom. The goal was to evaluate whether the number of options (possible answers) per question influenced the effectiveness of this assessment. Three exams (each with three versions) were given to each of…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Science Tests
Bryant, William – Practical Assessment, Research & Evaluation, 2017
As large-scale standardized tests move from paper-based to computer-based delivery, opportunities arise for test developers to make use of items beyond traditional selected and constructed response types. Technology-enhanced items (TEIs) have the potential to provide advantages over conventional items, including broadening construct measurement,…
Descriptors: Standardized Tests, Test Items, Computer Assisted Testing, Test Format
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects, and it has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
He, Chunxiu – International Journal of Higher Education, 2019
Given the practical significance of vocabulary testing in language teaching and the theoretical foundations of developing a vocabulary test, four well-established vocabulary tests are introduced for diagnostic purposes together with their corresponding validation studies, with a focus on the designed purpose, the selection of the items, the…
Descriptors: Vocabulary Development, Language Tests, Validity, Test Format
Cutumisu, Maria; Adams, Cathy; Lu, Chang – Journal of Science Education and Technology, 2019
Computational thinking (CT) is regarded as an essential twenty-first century competency and it is already embedded in K-12 curricula across the globe. However, research on assessing CT has lagged, with few assessments being implemented and validated. Moreover, there is a lack of systematic grouping of CT assessments. This scoping review examines…
Descriptors: Computation, Thinking Skills, 21st Century Skills, Elementary Secondary Education