Showing 286 to 300 of 510 results
Peer reviewed
PDF on ERIC
Sukkarieh, Jane Z.; von Davier, Matthias; Yamamoto, Kentaro – ETS Research Report Series, 2012
This document describes a solution to a problem in the automatic content scoring of the multilingual character-by-character highlighting item type. The solution is language-independent and represents a significant enhancement. It not only facilitates automatic scoring but also plays an important role in clustering students' responses;…
Descriptors: Scoring, Multilingualism, Test Items, Role
Peer reviewed
Direct link
Deevy, Patricia; Weil, Lisa Wisman; Leonard, Laurence B.; Goffman, Lisa – Language, Speech, and Hearing Services in Schools, 2010
Purpose: The purpose of this study was to assess the diagnostic accuracy of the Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998) using a sample of 4- and 5-year-olds with and without specific language impairment (SLI) and to evaluate its feasibility for use in universal screening. Method: The NRT was administered to 29 children with SLI…
Descriptors: Phonemes, Language Impairments, Scoring, Probability
Peer reviewed
PDF on ERIC
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the e-rater® system were built and evaluated for the TOEFL® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
Morris, Allison – OECD Publishing (NJ1), 2011
This report discusses the most relevant issues concerning student standardised testing in which there are no stakes for students ("standardised testing") through a literature review and a review of trends in standardised testing in OECD countries. Unlike standardised tests in which there are high stakes for students, no stakes implies that…
Descriptors: Standardized Tests, Testing, Educational Trends, Educational Research
Peer reviewed
Direct link
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater™) on the speaking section of the TOEFL iBT™ were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring
Peer reviewed
PDF on ERIC
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing e-rater® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for the use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternative criteria. In the 1st approach, the predicted variable is the expected rater score of the examinee's 2 essays. In the 2nd approach, the predicted variable is the expected rater score of 2 essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Peer reviewed
Direct link
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Peer reviewed
Direct link
Darrah, Marjorie; Fuller, Edgar; Miller, David – Journal of Computers in Mathematics and Science Teaching, 2010
This paper discusses a possible solution to a problem frequently encountered by educators seeking to use computer-based or multiple choice-based exams for mathematics. These assessment methodologies force a discrete grading system on students and do not allow for the possibility of partial credit. The research presented in this paper investigates…
Descriptors: College Students, College Mathematics, Calculus, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Shahnazari-Dorcheh, Mohammadtaghi; Roshan, Saeed – English Language Teaching, 2012
Due to the lack of a span test for use in language-specific and cross-language studies, this study provides L1 and L2 researchers with a reliable, language-independent span test (a math span test) for the measurement of working memory capacity. It also describes the development, validation, and scoring method of this test. This test included 70…
Descriptors: Language Research, Native Language, Second Language Learning, Scoring
Peer reviewed
PDF on ERIC
Liao, Chen-Huei; Kuo, Bor-Chen; Pai, Kai-Chih – Turkish Online Journal of Educational Technology - TOJET, 2012
Automated scoring by means of Latent Semantic Analysis (LSA) has recently been introduced to improve on the traditional human scoring system. The purposes of the present study were to develop an LSA-based assessment system to evaluate children's Chinese sentence construction skills and to examine the effectiveness of the LSA-based automated scoring function…
Descriptors: Foreign Countries, Program Effectiveness, Scoring, Personality
Peer reviewed
Direct link
Bejar, Isaac I. – Assessment in Education: Principles, Policy & Practice, 2011
Automated scoring of constructed responses is already operational in several testing programmes. However, as the methodology matures and the demand for the utilisation of constructed responses increases, the volume of automated scoring is likely to increase at a fast pace. Quality assurance and control of the scoring process will likely be more…
Descriptors: Evidence, Quality Control, Scoring, Quality Assurance
Peer reviewed
Direct link
Rosen, Yigal, Ed.; Ferrara, Steve, Ed.; Mosharraf, Maryam, Ed. – IGI Global, 2016
Education is expanding to include a stronger focus on the practical application of classroom lessons in an effort to prepare the next generation of scholars for a changing world economy centered on collaborative and problem-solving skills for the digital age. "The Handbook of Research on Technology Tools for Real-World Skill Development"…
Descriptors: Technological Literacy, Technology Uses in Education, Problem Solving, Skill Development
Polikoff, Morgan S. – Center for American Progress, 2014
The Common Core State Standards (CCSS) were created in response to the shortcomings of No Child Left Behind era standards and assessments. Among those failings were the poor quality of content standards and assessments and the variability in content expectations and proficiency targets across states, as well as concerns related to the economic…
Descriptors: Common Core State Standards, Educational Legislation, Federal Legislation, Elementary Secondary Education
Peer reviewed
Direct link
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. Such claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing