Showing 1 to 15 of 28 results
Peer reviewed
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
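The abstract does not reproduce the scoring procedure itself, but core lexicon (CoreLex) analysis is generally operationalized as a count of how many items from a predefined core word list occur in a transcript. A minimal sketch of that general idea follows; the lexicon contents, tokenization, and function names are illustrative placeholders, not the study's materials.

```python
# Minimal sketch of a CoreLex-style count: how many items from a predefined
# core word list occur at least once in a transcript. The lexicon and the
# sample transcript are illustrative placeholders, not the study's materials.

CORE_LEXICON = {"boy", "girl", "cookie", "water", "fall", "stool"}  # illustrative only

def corelex_score(transcript_tokens, lexicon=CORE_LEXICON):
    """Return the number of core lexicon items produced at least once."""
    produced = {token.lower() for token in transcript_tokens}
    return len(lexicon & produced)

print(corelex_score("the boy fell off the stool into the water".split()))  # -> 3
```

A fuller implementation would normally normalize word forms (e.g., lemmatize "fell" to "fall") before matching, which this sketch deliberately omits.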
Peer reviewed
Han, Chao; Xiao, Xiaoyan – Language Testing, 2022
The quality of sign language interpreting (SLI) is a gripping construct among practitioners, educators and researchers, calling for reliable and valid assessment. There has been a diverse array of methods in the extant literature to measure SLI quality, ranging from traditional error analysis to recent rubric scoring. In this study, we want to…
Descriptors: Comparative Analysis, Sign Language, Deaf Interpreting, Evaluators
Peer reviewed
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
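The abstract does not spell out the derivation, but the standard item response theory relationship behind analytic standard errors is that SE(theta) is approximately 1 / sqrt(I(theta)), where I(theta) is the test information at that ability level. The sketch below illustrates this under an assumed 2PL model with made-up item parameters; it is not the authors' MST-specific procedure.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def test_information(theta, items):
    """Sum of 2PL item information a^2 * P * (1 - P) over (a, b) item pairs."""
    info = 0.0
    for a, b in items:
        p = p_2pl(theta, a, b)
        info += a * a * p * (1.0 - p)
    return info

def analytic_se(theta, items):
    """Standard error of the ability estimate implied by the test information."""
    return 1.0 / math.sqrt(test_information(theta, items))

items = [(1.2, -0.5), (0.9, 0.0), (1.5, 0.8)]  # (a, b) pairs, illustrative only
for theta in (-1.0, 0.0, 1.0):
    print(theta, round(analytic_se(theta, items), 3))
```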
Peer reviewed
Jiao, Yishan; LaCross, Amy; Berisha, Visar; Liss, Julie – Journal of Speech, Language, and Hearing Research, 2019
Purpose: Subjective speech intelligibility assessment is often preferred over more objective approaches that rely on transcript scoring. This is, in part, because of the intensive manual labor associated with extracting objective metrics from transcribed speech. In this study, we propose an automated approach for scoring transcripts that provides…
Descriptors: Suprasegmentals, Phonemes, Error Patterns, Scoring
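The truncated abstract does not name the scoring metric, but a common way to score transcripts automatically against an intended target is a normalized edit distance over word or phoneme sequences. The sketch below illustrates that general approach only; it is not the authors' algorithm, and the example sentences are invented.

```python
def edit_distance(reference, hypothesis):
    """Levenshtein distance between two token sequences (words or phonemes)."""
    m, n = len(reference), len(hypothesis)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[m][n]

target = "the boy fell off the ladder".split()
transcribed = "the boy fall of ladder".split()
print(edit_distance(target, transcribed) / len(target))  # normalized error rate
```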
Peer reviewed
Lee, Shinhye; Winke, Paula – Language Testing, 2018
We investigated how young language learners process their responses on and perceive a computer-mediated, timed speaking test. Twenty 8-, 9-, and 10-year-old non-native English-speaking children (NNSs) and eight same-aged, native English-speaking children (NSs) completed seven computerized sample TOEFL® Primary™ speaking test tasks. We investigated…
Descriptors: Elementary School Students, Second Language Learning, Responses, Computer Assisted Testing
Nash, Brooke L. – ProQuest LLC, 2012
While significant progress has been made in recent years on technology enabled assessments (TEAs), including assessment systems that incorporate scaffolding into the assessment process, there is a dearth of research regarding psychometric scoring models that can be used to fully capture students' knowledge, skills and abilities as measured by…
Descriptors: Scoring, Scaffolding (Teaching Technique), Computer Assisted Testing, Models
Peer reviewed
Yarnell, Jordy B.; Pfeiffer, Steven I. – Journal of Psychoeducational Assessment, 2015
The present study examined the psychometric equivalence of administering a computer-based version of the Gifted Rating Scale (GRS) compared with the traditional paper-and-pencil GRS-School Form (GRS-S). The GRS-S is a teacher-completed rating scale used in gifted assessment. The GRS-Electronic Form provides an alternative method of administering…
Descriptors: Gifted, Psychometrics, Rating Scales, Computer Assisted Testing
Peer reviewed
Massey, Chris L.; Gambrell, Linda B. – Literacy Research and Instruction, 2014
Literacy educators and researchers have long recognized the importance of increasing students' writing proficiency across age and grade levels. With the release of the Common Core State Standards (CCSS), a new and greater emphasis is being placed on writing in the K-12 curriculum. Educators, as well as the authors of the CCSS, agree that…
Descriptors: Writing Evaluation, State Standards, Instructional Effectiveness, Writing Ability
Peer reviewed
Suzuki, Yuichi; DeKeyser, Robert – Language Learning, 2015
The present study challenges the validity of elicited imitation (EI) as a measure for implicit knowledge, investigating to what extent online error detection and subsequent sentence repetition draw on implicit knowledge. To assess online detection during listening, a word monitoring component was built into an EI task. Advanced-level Japanese L2…
Descriptors: Comparative Analysis, Validity, Second Language Learning, Correlation
Peer reviewed
Harik, Polina; Baldwin, Peter; Clauser, Brian – Applied Psychological Measurement, 2013
Growing reliance on complex constructed response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that "compare" automated scoring strategies. Here, comparisons are made among five strategies for…
Descriptors: Computer Assisted Testing, Automation, Scoring, Comparative Analysis
Peer reviewed
Delen, Erhan – EURASIA Journal of Mathematics, Science & Technology Education, 2015
As technology has become more advanced and accessible in instructional settings, there has been an upward trend in computer-based testing over the last few decades. The present experimental study examines students' behaviors during computer-based testing in two different conditions and explores how these conditions affect the test results. Results…
Descriptors: Foreign Countries, Computer Assisted Testing, Student Behavior, Test Results
Peer reviewed
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
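The abstract describes the evaluation only in general terms; a widely reported check when validating automated essay scoring models is the agreement between automated and human scores, often summarized as quadratic weighted kappa. A minimal sketch of that statistic follows, using made-up scores rather than the study's data.

```python
import numpy as np

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    """Quadratically weighted agreement between two sets of integer scores."""
    k = max_score - min_score + 1
    observed = np.zeros((k, k))
    for h, m in zip(human, machine):
        observed[h - min_score, m - min_score] += 1
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    weights = np.array([[(i - j) ** 2 for j in range(k)] for i in range(k)]) / (k - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

human_scores = [2, 3, 4, 4, 5, 3]    # illustrative only
machine_scores = [2, 3, 3, 4, 5, 4]  # illustrative only
print(round(quadratic_weighted_kappa(human_scores, machine_scores, 1, 5), 3))
```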
Peer reviewed
PDF on ERIC
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Peer reviewed
PDF on ERIC
Ghilay, Yaron; Ghilay, Ruth – Journal of Educational Technology, 2012
The study examined the advantages and disadvantages of computerised assessment compared with traditional evaluation. It was based on two samples of college students (n=54) who took computerised tests instead of paper-based exams. Students were asked to answer a questionnaire focused on test effectiveness, experience, flexibility and integrity.…
Descriptors: Student Evaluation, Higher Education, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater[R] and the "Criterion"[R] Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing