Showing all 15 results
Peer reviewed
Swapna Haresh Teckwani; Amanda Huee-Ping Wong; Nathasha Vihangi Luke; Ivan Cherh Chiet Low – Advances in Physiology Education, 2024
The advent of artificial intelligence (AI), particularly large language models (LLMs) like ChatGPT and Gemini, has significantly impacted the educational landscape, offering unique opportunities for learning and assessment. In the realm of written assessment grading, traditionally viewed as a laborious and subjective process, this study sought to…
Descriptors: Accuracy, Reliability, Computational Linguistics, Standards
Peer reviewed
Nadolski, Rob J.; Hummel, Hans G. K.; Rusman, Ellen; Ackermans, Kevin – Educational Technology Research and Development, 2021
Acquiring complex oral presentation skills is cognitively demanding for students and requires intensive teacher guidance. The aim of this study was twofold: (a) to identify and apply design guidelines in developing an effective formative assessment method for oral presentation skills during classroom practice, and (b) to develop and compare two…
Descriptors: Formative Evaluation, Oral Language, Scoring Rubrics, Design
Peer reviewed
Passonneau, Rebecca J.; Poddar, Ananya; Gite, Gaurav; Krivokapic, Alisa; Yang, Qian; Perin, Dolores – International Journal of Artificial Intelligence in Education, 2018
Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive and could benefit from an automated approach. We compare a main ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate…
Descriptors: Computer Assisted Testing, Writing Evaluation, Content Analysis, Scoring Rubrics
Peer reviewed
Vandeweerd, Nathan; Housen, Alex; Paquot, Magali – Language Testing, 2023
This study investigates whether re-thinking the separation of lexis and grammar in language testing could lead to more valid inferences about proficiency across modes. As argued by Römer, typical scoring rubrics ignore important information about proficiency encoded at the lexis-grammar interface, in particular how the co-selection of lexical and…
Descriptors: French, Language Tests, Grammar, Second Language Learning
Peer reviewed
PDF on ERIC
Ariamanesh, Ali A.; Barati, Hossein; Youhanaee, Manijeh – International TESOL Journal, 2022
The present study investigated the speaking module of the TOEFL iBT with an emphasis on the dichotomy of independent and integrated tasks. It explored potential differences between the two speaking conditions based on the oral performance elicited from a group of Iranian test takers. To collect the required data, a simulated…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Computer Assisted Testing
Hoda Ahmed Yousry Mohammed – Online Submission, 2024
The aim of the current study is to investigate the effect of using the life-wide learning approach on developing English language fluency for preparatory-stage students in official language schools. The study adopted a one-group pre-post experimental design, with a pre-post fluency test along with a triangulation method that integrates both…
Descriptors: English (Second Language), Second Language Learning, Language Fluency, Learning Processes
Jennifer Francois – ProQuest LLC, 2021
While recent research has shown that multimodal feedback and the use of formulaic sequences (FS) are effective in improving student performance on writing tasks, there is a dearth of studies on the impact of these aspects on computer-based academic speaking assessments. This dissertation seeks to fill this gap by examining the impact of both…
Descriptors: Language Tests, Computer Assisted Testing, Second Language Learning, Second Language Instruction
Peer reviewed
Shermis, Mark D.; Lottridge, Sue; Mayfield, Elijah – Journal of Educational Measurement, 2015
This study investigated the impact of anonymizing text on predicted scores made by two kinds of automated scoring engines: one that incorporates elements of natural language processing (NLP) and one that does not. Eight data sets (N = 22,029) were used to form both training and test sets in which the scoring engines had access to both text and…
Descriptors: Scoring, Essays, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Llosa, Lorena; Malone, Margaret E. – Language Testing, 2019
Investigating the comparability of students' performance on TOEFL writing tasks and actual academic writing tasks is essential to provide backing for the extrapolation inference in the TOEFL validity argument (Chapelle, Enright, & Jamieson, 2008). This study compared 103 international non-native-English-speaking undergraduate students'…
Descriptors: Computer Assisted Testing, Language Tests, English (Second Language), Second Language Learning
Peer reviewed
PDF on ERIC
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Darling-Hammond, Linda – Council of Chief State School Officers, 2017
The Every Student Succeeds Act (ESSA) opened up new possibilities for how student and school success are defined and supported in American public education. States have greater responsibility for designing and building their assessment and accountability systems. These new opportunities to develop performance assessments are critically important…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used, and their importance seems likely to grow due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Peer reviewed
Anglin, Linda; Anglin, Kenneth; Schumann, Paul L.; Kaliski, John A. – Decision Sciences Journal of Innovative Education, 2008
This study compares computer-assisted grading rubrics with other grading methods with respect to the efficiency and effectiveness of different grading processes for subjective assignments. The test was performed in a large Introduction to Business course. The students in this course were randomly assigned to four treatment groups…
Descriptors: Scoring Rubrics, Grading, Computer Assisted Testing, Evaluation Methods
Peer reviewed
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Peer reviewed
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays, conducted as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation