Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 3 |
Since 2016 (last 10 years) | 11 |
Since 2006 (last 20 years) | 19 |
Descriptor
Automation | 27 |
Responses | 27 |
Scoring | 10 |
Computer Assisted Testing | 6 |
Foreign Countries | 6 |
Accuracy | 5 |
Test Items | 5 |
Test Format | 4 |
College Students | 3 |
English (Second Language) | 3 |
Test Construction | 3 |
Author
Heilman, Michael | 2 |
Allen, Laura K. | 1 |
Arbib, M. A. | 1 |
Bai, Lifang | 1 |
Barnes-Holmes, Dermot | 1 |
Barnes-Holmes, Yvonne | 1 |
Bejar, Isaac I. | 1 |
Bennett, Randy Elliot | 1 |
Boles, Shawn | 1 |
Botzer, Assaf | 1 |
Clauser, Brian E. | 1 |
Publication Type
Journal Articles | 17 |
Reports - Research | 14 |
Reports - Descriptive | 4 |
Reports - Evaluative | 2 |
Speeches/Meeting Papers | 2 |
Information Analyses | 1 |
Education Level
Higher Education | 6 |
Postsecondary Education | 4 |
Secondary Education | 2 |
Adult Education | 1 |
Elementary Education | 1 |
Grade 7 | 1 |
Grade 8 | 1 |
Grade 9 | 1 |
High Schools | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Location
Germany | 2 |
California | 1 |
China | 1 |
China (Beijing) | 1 |
Israel | 1 |
United Kingdom (England) | 1 |
United Kingdom (Northern… | 1 |
Assessments and Surveys
NEO Five Factor Inventory | 1 |
Program for International… | 1 |
Clauser, Brian E.; Yaneva, Victoria; Baldwin, Peter; Ha, Le An; Mee, Janet – Applied Measurement in Education, 2024
Multiple-choice questions have become ubiquitous in educational measurement because the format allows for efficient and accurate scoring. Nonetheless, there remains continued interest in constructed-response formats. This interest has driven efforts to develop computer-based scoring procedures that can accurately and efficiently score these items.…
Descriptors: Computer Uses in Education, Artificial Intelligence, Scoring, Responses
Fail, Stefanie; Schober, Michael F.; Conrad, Frederick G. – International Journal of Social Research Methodology, 2021
To explore socially desirable responding in telephone surveys, this study examines response latencies in answers to 27 questions in a corpus of 319 audio-recorded voice interviews on iPhones. Response latencies were compared when respondents (a) answered questions on sensitive vs. nonsensitive topics (as classified by online raters); (b) produced…
Descriptors: Telephone Surveys, Handheld Devices, Responses, Interviews
McCarthy, Kathryn S.; Allen, Laura K.; Hinze, Scott R. – Grantee Submission, 2020
Open-ended "constructed responses" promote deeper processing of course materials. Further, evaluation of these explanations can yield important information about students' cognition. This study examined how students' constructed responses, generated at different points during learning, relate to their later comprehension outcomes.…
Descriptors: Reading Comprehension, Prediction, Responses, College Students
Wang, Cong; Liu, Xiufeng; Wang, Lei; Sun, Ying; Zhang, Hongyan – Journal of Science Education and Technology, 2021
Assessing scientific argumentation is one of the main challenges in science education. Constructed-response (CR) items can be used to measure the coherence of student ideas and inform science instruction on argumentation. Published research on automated scoring of CR items has been conducted mostly in English writing, rarely in other languages. The…
Descriptors: Automation, Scoring, Accuracy, Responses
Wang, Zhen; Zechner, Klaus; Sun, Yu – Language Testing, 2018
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish…
Descriptors: Automation, Scoring, Speech Tests, Language Tests
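The Wang, Zechner, and Sun entry above concerns comparing automated scores against human raters across items and test-taker subgroups. As a hedged illustration only, not the study's analysis, a per-subgroup agreement check might look like the following; the data, column names, and choice of Pearson correlation are all assumptions.

```python
# Illustrative only: compare human and machine scores within subgroups.
# The data frame, column names, and agreement statistic are assumptions.
import pandas as pd

scores = pd.DataFrame({
    "subgroup": ["A", "A", "A", "B", "B", "B"],
    "human":    [2, 3, 1, 4, 2, 3],
    "machine":  [2, 3, 2, 3, 2, 3],
})

# Pearson correlation between human and machine scores, per subgroup.
for name, group in scores.groupby("subgroup"):
    print(name, round(group["human"].corr(group["machine"]), 3))
```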
Zhang, Mo; Chen, Jing; Ruan, Chunyi – ETS Research Report Series, 2016
Successful detection of unusual responses is critical for using machine scoring in the assessment context. This study evaluated the utility of approaches to detecting unusual responses in automated essay scoring. Two research questions were pursued. One question concerned the performance of various prescreening advisory flags, and the other…
Descriptors: Essays, Scoring, Automation, Test Scoring Machines
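The Zhang, Chen, and Ruan entry above concerns prescreening advisory flags for unusual responses in automated essay scoring. The report's actual flags are not reproduced here; the rules, names, and thresholds below are hypothetical, sketched only to show what a prescreening pass can look like.

```python
# Hypothetical prescreening advisory flags for essay responses.
# The flag names, rules, and thresholds are illustrative assumptions,
# not the flags evaluated in the report above.

def advisory_flags(response: str, prompt_keywords: set[str]) -> list[str]:
    """Return the advisory flags raised for a single essay response."""
    flags = []
    tokens = response.lower().split()

    # Flag 1: response too short to score reliably.
    if len(tokens) < 25:
        flags.append("TOO_SHORT")

    # Flag 2: one token dominates the response (excessive repetition).
    if tokens and max(tokens.count(t) for t in set(tokens)) / len(tokens) > 0.30:
        flags.append("EXCESSIVE_REPETITION")

    # Flag 3: little overlap with prompt vocabulary, a crude off-topic signal.
    if prompt_keywords and len(prompt_keywords & set(tokens)) / len(prompt_keywords) < 0.10:
        flags.append("POSSIBLY_OFF_TOPIC")

    return flags


# Example: a very short, repetitive response triggers two flags.
print(advisory_flags("test test test test", {"photosynthesis", "energy", "light"}))
```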
Sung, Kyung Hee; Noh, Eun Hee; Chon, Kyong Hee – Asia Pacific Education Review, 2017
With increased use of constructed response items in large scale assessments, the cost of scoring has been a major consideration (Noh et al. in KICE Report RRE 2012-6, 2012; Wainer and Thissen in "Applied Measurement in Education" 6:103-118, 1993). In response to the scoring cost issues, various forms of automated system for scoring…
Descriptors: Automation, Scoring, Social Studies, Test Items
Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C. – Journal of Research in Science Teaching, 2016
Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…
Descriptors: Science Tests, Scoring, Automation, Validity
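The Liu et al. entry above examines the accuracy of an automated scoring engine against human scores. Quadratic weighted kappa is one widely used agreement statistic for that kind of check; whether it matches the study's own analysis is not confirmed here, and the scores below are invented.

```python
# Minimal sketch: human-machine agreement via quadratic weighted kappa.
# The score vectors are made up; the metric choice is an assumption.
from sklearn.metrics import cohen_kappa_score

human_scores = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
machine_scores = [0, 1, 2, 3, 3, 1, 1, 2, 3, 1]

qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"Quadratic weighted kappa: {qwk:.3f}")
```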
Higgins, Derrick; Heilman, Michael – Educational Measurement: Issues and Practice, 2014
As methods for automated scoring of constructed-response items become more widely adopted in state assessments, and are used in more consequential operational configurations, it is critical that their susceptibility to gaming behavior be investigated and managed. This article provides a review of research relevant to how construct-irrelevant…
Descriptors: Automation, Scoring, Responses, Test Wiseness
Chinkina, Maria; Ruiz, Simón; Meurers, Detmar – Research-publishing.net, 2017
We integrate insights from research in Second Language Acquisition (SLA) and Computational Linguistics (CL) to generate text-based questions. We discuss the generation of wh- questions as functionally-driven input enhancement facilitating the acquisition of particle verbs and report the results of two crowdsourcing studies. The first study shows…
Descriptors: Electronic Publishing, Collaborative Writing, Second Language Learning, Computational Linguistics
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank – Educational and Psychological Measurement, 2016
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
Descriptors: Educational Assessment, Coding, Automation, Responses
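The Zehner, Sälzer, and Goldhammer entry above describes automatic coding of short text responses built from openly licensed NLP and statistical components. A minimal baseline in that spirit, not the authors' actual pipeline, might pair bag-of-words features with a linear classifier; the training examples and code labels below are invented.

```python
# Baseline sketch for automatic coding of short text responses:
# TF-IDF features plus logistic regression, all open-source components.
# Illustrative only; not the pipeline implemented in the study above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: short responses and their human-assigned codes.
responses = [
    "plants use sunlight to make food",
    "the sun gives plants energy",
    "animals eat plants for energy",
    "animals get energy from food",
]
codes = ["plant_energy", "plant_energy", "animal_energy", "animal_energy"]

coder = make_pipeline(TfidfVectorizer(), LogisticRegression())
coder.fit(responses, codes)

print(coder.predict(["plants turn light into food"]))  # expected: plant_energy
```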
Botzer, Assaf; Meyer, Joachim; Parmet, Yisrael – Journal of Experimental Psychology: Applied, 2013
Binary cueing systems assist in many tasks, often alerting people about potential hazards (such as alarms and alerts). We investigate whether cues, besides possibly improving decision accuracy, also affect the effort users invest in tasks and whether the required effort in tasks affects the responses to cues. We developed a novel experimental tool…
Descriptors: Foreign Countries, College Students, Cues, Validity
Bai, Lifang; Hu, Guangwei – Educational Psychology, 2017
Automated writing evaluation (AWE) systems can provide immediate computer-generated quantitative assessments and qualitative diagnostic feedback on an enormous number of submitted essays. However, limited research attention has been paid to locally designed AWE systems used in English as a foreign language (EFL) classroom contexts. This study…
Descriptors: Computer Assisted Testing, Writing Evaluation, Automation, Essay Tests
Haudek, Kevin C.; Kaplan, Jennifer J.; Knight, Jennifer; Long, Tammy; Merrill, John; Munn, Alan; Nehm, Ross; Smith, Michelle; Urban-Lurain, Mark – CBE - Life Sciences Education, 2011
Concept inventories, consisting of multiple-choice questions built around common student misconceptions, are designed to reveal student thinking. However, students often have complex, heterogeneous ideas about scientific concepts. Constructed-response assessments, in which students must create their own answer, may better reveal students'…
Descriptors: STEM Education, Student Evaluation, Formative Evaluation, Scientific Concepts
Szalma, James L.; Taylor, Grant S. – Journal of Experimental Psychology: Applied, 2011
This study examined the relationship of operator personality (Five Factor Model) and characteristics of the task and of adaptive automation (reliability and adaptiveness--whether the automation was well-matched to changes in task demand) to operator performance, workload, stress, and coping. This represents the first investigation of how the Five…
Descriptors: Personality Traits, Coping, Automation, Individual Differences