Showing 136 to 150 of 632 results
Peer reviewed
Cox, Troy L.; Brown, Alan V.; Thompson, Gregory L. – Language Testing, 2023
The rating of proficiency tests that use the Interagency Language Roundtable (ILR) and American Council on the Teaching of Foreign Languages (ACTFL) guidelines rests on the claim that each major level is based on hierarchical linguistic functions requiring mastery of multidimensional traits in such a way that each level subsumes the levels beneath it. These…
Descriptors: Oral Language, Language Fluency, Scoring, Cues
Peer reviewed
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is first to survey current research on the automated scoring of language and then to highlight how automated scoring affects the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Peer reviewed
Shermis, Mark D.; Lottridge, Sue; Mayfield, Elijah – Journal of Educational Measurement, 2015
This study investigated the impact of anonymizing text on predicted scores made by two kinds of automated scoring engines: one that incorporates elements of natural language processing (NLP) and one that does not. Eight data sets (N = 22,029) were used to form both training and test sets in which the scoring engines had access to both text and…
Descriptors: Scoring, Essays, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Seifried, Jürgen; Brandt, Steffen; Kögler, Kristina; Rausch, Andreas – Cogent Education, 2020
Problem-solving competence is an important requirement in back offices across different industries. Thus, the assessment of problem-solving competence has become an important issue in learning and instruction in vocational and professional contexts. We developed a computer-based low-stakes assessment of problem-solving competence in the domain of…
Descriptors: Foreign Countries, Vocational Education, Student Evaluation, Computer Assisted Testing
Peer reviewed
Magliano, Joseph P.; Lampi, Jodi P.; Ray, Melissa; Chan, Greta – Grantee Submission, 2020
Coherent mental models for successful comprehension require inferences that establish semantic "bridges" between discourse constituents and "elaborations" that incorporate relevant background knowledge. While it is established that individual differences in the extent to which postsecondary students engage in these processes…
Descriptors: Reading Comprehension, Reading Strategies, Inferences, Reading Tests
Peer reviewed
Daniels, Paul – TESL-EJ, 2022
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, "Speech Assessment for Moodle" ("SAM"), is an open-source solution developed by the author that makes use of Google's…
Descriptors: Speech Communication, Auditory Perception, Computer Uses in Education, Computer Assisted Testing
Peer reviewed
Conijn, Rianne; Martinez-Maldonado, Roberto; Knight, Simon; Buckingham Shum, Simon; Van Waes, Luuk; van Zaanen, Menno – Computer Assisted Language Learning, 2022
Current writing support tools tend to focus on assessing final or intermediate products, rather than the writing process. However, sensing technologies, such as keystroke logging, can enable provision of automated feedback during, and on aspects of, the writing process. Despite this potential, little is known about the critical indicators that can…
Descriptors: Automation, Feedback (Response), Writing Evaluation, Learning Analytics
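As a concrete illustration of the process indicators that keystroke logging can yield, the following minimal sketch derives two commonly used measures, pause counts and burst lengths, from a list of timestamped keystroke events. The event format, the 2-second pause threshold, and the process_indicators function are illustrative assumptions, not the instrumentation or indicators examined in the study.

```python
# Minimal sketch: deriving writing-process indicators from a keystroke log.
# The log format (a list of (timestamp_seconds, character) tuples) and the
# 2-second pause threshold are illustrative assumptions.

PAUSE_THRESHOLD = 2.0  # seconds of inactivity counted as a pause

def process_indicators(log):
    """Return pause count, mean inter-key interval, and burst lengths."""
    intervals = [t2 - t1 for (t1, _), (t2, _) in zip(log, log[1:])]
    pauses = sum(1 for gap in intervals if gap >= PAUSE_THRESHOLD)

    # A "burst" is a run of keystrokes uninterrupted by a long pause.
    bursts, current = [], 1
    for gap in intervals:
        if gap >= PAUSE_THRESHOLD:
            bursts.append(current)
            current = 1
        else:
            current += 1
    bursts.append(current)

    mean_iki = sum(intervals) / len(intervals) if intervals else 0.0
    return {"pauses": pauses, "mean_iki": mean_iki, "burst_lengths": bursts}

# Example: a short typing session with one long mid-sentence pause.
log = [(0.0, "T"), (0.2, "h"), (0.4, "e"), (3.1, " "), (3.3, "c"), (3.5, "a"), (3.7, "t")]
print(process_indicators(log))
```

Indicators of this kind (pausing, bursts, revisions) are the raw material such tools would aggregate into feedback during drafting rather than after submission.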
Peer reviewed
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
Beula M. Magimairaj; Philip Capin; Sandra L. Gillam; Sharon Vaughn; Greg Roberts; Anna-Maria Fall; Ronald B. Gillam – Grantee Submission, 2022
Purpose: Our aim was to evaluate the psychometric properties of the online administered format of the Test of Narrative Language--Second Edition (TNL-2; Gillam & Pearson, 2017), given the importance of assessing children's narrative ability and the considerable absence of psychometric studies of spoken language assessments administered online.…
Descriptors: Computer Assisted Testing, Language Tests, Story Telling, Language Impairments
Li, Haiying; Cai, Zhiqiang; Graesser, Arthur – Grantee Submission, 2018
In this study we developed and evaluated a crowdsourcing-based latent semantic analysis (LSA) approach to computerized summary scoring (CSS). LSA is a frequently used mathematical component in CSS, where LSA similarity represents the extent to which the to-be-graded target summary is similar to a model summary or a set of exemplar summaries.…
Descriptors: Computer Assisted Testing, Scoring, Semantics, Evaluation Methods
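To make the role of LSA similarity concrete: a typical CSS pipeline projects summaries into a low-dimensional semantic space and scores a target summary by its cosine similarity to one or more exemplar summaries. The sketch below uses scikit-learn's TfidfVectorizer and TruncatedSVD as a stand-in for a trained LSA space; the toy corpus, component count, and mean-similarity scoring rule are illustrative assumptions, not the authors' crowdsourcing-based method.

```python
# Minimal LSA-similarity sketch for summary scoring (illustrative only).
# A real system would fit the LSA space on a large corpus; here a handful
# of exemplar summaries stand in for that corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

exemplars = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water, and carbon dioxide to produce glucose.",
    "Chlorophyll absorbs light, driving the production of sugars in leaves.",
]
target = "Using sunlight, plants turn water and carbon dioxide into sugar."

# Build a small TF-IDF + SVD pipeline as a stand-in for a trained LSA space.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(exemplars + [target])
svd = TruncatedSVD(n_components=2, random_state=0)  # toy dimensionality
vectors = svd.fit_transform(tfidf)

# Score the target by its mean cosine similarity to the exemplar summaries.
sims = cosine_similarity(vectors[-1:], vectors[:-1])[0]
print(f"similarity to exemplars: {sims.round(3)}, score = {sims.mean():.3f}")
```

The higher the target's similarity to the exemplar set in the reduced space, the higher the assigned summary score.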
Peer reviewed
Nadolski, Rob J.; Hummel, Hans G. K.; Rusman, Ellen; Ackermans, Kevin – Educational Technology Research and Development, 2021
Acquiring complex oral presentation skills is cognitively demanding for students and requires intensive teacher guidance. The aim of this study was twofold: (a) to identify and apply design guidelines in developing an effective formative assessment method for oral presentation skills during classroom practice, and (b) to develop and compare two…
Descriptors: Formative Evaluation, Oral Language, Scoring Rubrics, Design
Peer reviewed
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
Peer reviewed
Scoular, Claire; Care, Esther – Educational Assessment, 2019
Recent educational and psychological research has highlighted shifting workplace requirements and the changes needed to equip the emerging workforce with skills for the 21st century. These shifts raise issues for, and underscore the importance of, new methods of assessment. This study addresses some of these issues by describing a scoring…
Descriptors: Cooperation, Problem Solving, Scoring, 21st Century Skills
Peer reviewed
Cetin-Berber, Dee Duygu; Sari, Halil Ibrahim; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
Routing examinees to modules based on their ability level is a very important aspect in computerized adaptive multistage testing. However, the presence of missing responses may complicate estimation of examinee ability, which may result in misrouting of individuals. Therefore, missing responses should be handled carefully. This study investigated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Error of Measurement, Research Problems
Peer reviewed
Dixon-Román, Ezekiel; Nichols, T. Philip; Nyame-Mensah, Ama – Learning, Media and Technology, 2020
In this article, we examine the sociopolitical implications of AI technologies as they are integrated into writing instruction and assessment. Drawing from new materialist and Black feminist thought, we consider how learning analytics platforms for writing are animated by and through entanglements of algorithmic reasoning, state standards and…
Descriptors: Racial Bias, Artificial Intelligence, Educational Technology, Writing Instruction