Publication Date
In 2025 | 1 |
Since 2024 | 4 |
Since 2021 (last 5 years) | 15 |
Descriptor
Responses | 15 |
Test Format | 15 |
Test Items | 10 |
Scoring | 7 |
Mathematics Tests | 5 |
Artificial Intelligence | 4 |
Computer Assisted Testing | 4 |
Problem Solving | 4 |
Foreign Countries | 3 |
Middle School Students | 3 |
Multiple Choice Tests | 3 |
Author
Guo, Wenjing | 2 |
Wind, Stefanie A. | 2 |
A. Lopez, Alexis | 1 |
Achim Goerres | 1 |
Allan S. Cohen | 1 |
Baral, Sami | 1 |
Bingölbali, Erhan | 1 |
Bingölbali, Ferhan | 1 |
Botelho, Anthony | 1 |
Brian E. Clauser | 1 |
Carney, Michele | 1 |
Publication Type
Journal Articles | 13 |
Reports - Research | 13 |
Reports - Descriptive | 2 |
Collected Works - General | 1 |
Speeches/Meeting Papers | 1 |
Location
Europe | 1 |
Germany | 1 |
New Jersey | 1 |
Turkey | 1 |
Assessments and Surveys
National Assessment of… | 2 |
Trends in International… | 1 |
Stefanie A. Wind; Yuan Ge – Measurement: Interdisciplinary Research and Perspectives, 2024
Mixed-format assessments made up of multiple-choice (MC) items and constructed-response (CR) items that are scored using rater judgments involve unique psychometric considerations. When these item types are combined to estimate examinee achievement, information about the psychometric quality of each component can depend on that of the other. For…
Descriptors: Interrater Reliability, Test Bias, Multiple Choice Tests, Responses
Brian E. Clauser; Victoria Yaneva; Peter Baldwin; Le An Ha; Janet Mee – Applied Measurement in Education, 2024
Multiple-choice questions have become ubiquitous in educational measurement because the format allows for efficient and accurate scoring. Nonetheless, interest in constructed-response formats persists. This interest has driven efforts to develop computer-based scoring procedures that can accurately and efficiently score these items.…
Descriptors: Computer Uses in Education, Artificial Intelligence, Scoring, Responses
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data resulting from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Jan Karem Höhne; Achim Goerres – International Journal of Social Research Methodology, 2024
The measurement of political solidarities and related concepts is an important endeavor in numerous scientific disciplines, such as political and social science research. European surveys, such as the Eurobarometer, frequently measure these concepts for people's home country and for Europe, raising questions with respect to the order of precedence.…
Descriptors: Surveys, Attitude Measures, Political Attitudes, Foreign Countries
Gerd Kortemeyer – Physical Review Physics Education Research, 2023
Solving problems is crucial for learning physics, and not only final solutions but also their derivations are important. Grading these derivations is labor intensive, as it generally involves human evaluation of handwritten work. AI tools have not been an alternative, since even for short answers they needed specific training for each problem or…
Descriptors: Artificial Intelligence, Problem Solving, Physics, Introductory Courses
Guo, Wenjing; Wind, Stefanie A. – Journal of Educational Measurement, 2021
The use of mixed-format tests made up of multiple-choice (MC) items and constructed response (CR) items is popular in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district- and state-level assessments in the United States. Rater effects, or raters' scoring tendencies that result in…
Descriptors: Test Format, Multiple Choice Tests, Scoring, Test Items
Wind, Stefanie A.; Guo, Wenjing – Educational Assessment, 2021
Scoring procedures for the constructed-response (CR) items in large-scale mixed-format educational assessments often involve checks for rater agreement or rater reliability. Although these analyses are important, researchers have documented rater effects that persist despite rater training and that are not always detected in rater agreement and…
Descriptors: Scoring, Responses, Test Items, Test Format
Mo, Ya; Carney, Michele; Cavey, Laurie; Totorica, Tatia – Applied Measurement in Education, 2021
There is a need for assessment items that assess complex constructs but can also be efficiently scored for evaluation of teacher education programs. In an effort to measure the construct of teacher attentiveness in an efficient and scalable manner, we are using exemplar responses elicited by constructed-response item prompts to develop…
Descriptors: Protocol Analysis, Test Items, Responses, Mathematics Teachers
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Baral, Sami; Botelho, Anthony; Santhanam, Abhishek; Gurung, Ashish; Cheng, Li; Heffernan, Neil – International Educational Data Mining Society, 2023
Teachers often rely on a range of open-ended problems to assess students' understanding of mathematical concepts. Beyond traditional conceptions of student open-ended work, commonly in the form of textual short-answer or essay responses, figures, tables, number lines, graphs, and pictographs are other examples of open-ended…
Descriptors: Mathematics Instruction, Mathematical Concepts, Problem Solving, Test Format
Bingölbali, Erhan; Bingölbali, Ferhan – International Journal of Progressive Education, 2021
This study explores the affordances that open-ended questions hold in comparison with closed-ended questions by examining 6th grade students' performance on a mathematics test. For this purpose, a questionnaire including 2 open-ended and 2 closed-ended questions was administered to 36 6th grade students. The questions were prepared in…
Descriptors: Questioning Techniques, Test Format, Test Items, Responses
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
A. Lopez, Alexis – Journal of Latinos and Education, 2023
In this study, I examined how 34 Spanish-speaking English language learners (ELLs) used their linguistic resources (English and Spanish) and language modes (oral and written language) to demonstrate their knowledge of proportional reasoning in a dual language mathematics assessment task. The assessment allows students to see the item in both…
Descriptors: Spanish Speaking, English Language Learners, Language Usage, Mathematics Instruction
Lopez, Alexis A.; Guzman-Orth, Danielle; Zapata-Rivera, Diego; Forsyth, Carolyn M.; Luce, Christine – ETS Research Report Series, 2021
Substantial progress has been made toward applying technology enhanced conversation-based assessments (CBAs) to measure the English-language proficiency of English learners (ELs). CBAs are conversation-based systems that use conversations among computer-animated agents and a test taker. We expanded the design and capability of prior…
Descriptors: Accuracy, English Language Learners, Language Proficiency, Language Tests
Mullis, Ina V. S., Ed.; Martin, Michael O., Ed.; von Davier, Matthias, Ed. – International Association for the Evaluation of Educational Achievement, 2021
TIMSS (Trends in International Mathematics and Science Study) is a long-standing international assessment of mathematics and science at the fourth and eighth grades that has collected trend data every four years since 1995. About 70 countries use TIMSS trend data for monitoring the effectiveness of their education systems in a global…
Descriptors: Achievement Tests, International Assessment, Science Achievement, Mathematics Achievement