Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael – Applied Measurement in Education, 2016
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
Descriptors: Essays, Learning Disabilities, Attention Deficit Hyperactivity Disorder, Scoring
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon – American Journal of Evaluation, 2018
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Descriptors: Bayesian Statistics, Evaluation Methods, Statistical Analysis, Hypothesis Testing
Davis, Larry – Language Testing, 2016
Two factors were investigated that are thought to contribute to consistency in rater scoring judgments: rater training and experience in scoring. Also considered were the relative effects of scoring rubrics and exemplars on rater performance. Experienced teachers of English (N = 20) scored recorded responses from the TOEFL iBT speaking test prior…
Descriptors: Evaluators, Oral Language, Scores, Language Tests
Crossley, Scott; Clevinger, Amanda; Kim, YouJin – Language Assessment Quarterly, 2014
There has been a growing interest in the use of integrated tasks in the field of second language testing to enhance the authenticity of language tests. However, the role of text integration in test takers' performance has not been widely investigated. The purpose of the current study is to examine the effects of text-based relational (i.e.,…
Descriptors: Language Proficiency, Connected Discourse, Language Tests, English (Second Language)
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2011
Assessing the comparability of different groups is an issue facing many researchers and evaluators in a variety of settings. Commonly, null hypothesis significance testing (NHST) is incorrectly used to demonstrate comparability when a non-significant result is found. This is problematic because a failure to find a difference between groups is not…
Descriptors: Medical Education, Evaluators, Intervals, Testing
Hoang, Giang Thi Linh; Kunnan, Antony John – Language Assessment Quarterly, 2016
Computer technology made its way into writing instruction and assessment with spelling and grammar checkers decades ago, but more recently it has done so with automated essay evaluation (AEE) and diagnostic feedback. And although many programs and tools have been developed in the last decade, not enough research has been conducted to support or…
Descriptors: Case Studies, Essays, Writing Evaluation, English (Second Language)
Schmid, Monika S.; Hopp, Holger – Language Testing, 2014
This study examines the methodology of global foreign accent ratings in studies on L2 speech production. In three experiments, we test how variation in raters, range within speech samples, as well as instructions and procedures affects ratings of accent in predominantly monolingual speakers of German, non-native speakers of German, as well as…
Descriptors: Comparative Analysis, Second Language Learning, Pronunciation, Native Speakers
Sydorenko, Tetyana; Maynard, Carson; Guntly, Erin – TESL Canada Journal, 2014
The criteria by which raters judge pragmatic appropriateness of language learners' speech acts are underexamined, especially when raters evaluate extended discourse. To shed more light on this process, the present study investigated what factors are salient to raters when scoring pragmatic appropriateness of extended request sequences, and which…
Descriptors: Evaluators, Discourse Analysis, Pragmatics, Evaluation Criteria
Geluso, Joe – Computer Assisted Language Learning, 2013
Usage-based theories of language learning suggest that native speakers of a language are acutely aware of formulaic language due in large part to frequency effects. Corpora and data-driven learning can offer useful insights into frequent patterns of naturally occurring language to second/foreign language learners who, unlike native speakers, are…
Descriptors: Native Speakers, English (Second Language), Search Engines, Second Language Learning
Jeong, Heejeong – Language Testing, 2013
Language assessment courses (LACs) are taught by professionals who have majored in the area of language testing (language testers or LTs), but also by others who come from different language-related majors (non-language testers, non-LTs). Different language assessment courses may be developed, depending on who teaches the course and the…
Descriptors: Language Tests, Courses, Teacher Education, Teacher Educators
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
Jamieson, Joan; Poonpon, Kornwipa – ETS Research Report Series, 2013
Research and development of a new type of scoring rubric for the integrated speaking tasks of "TOEFL iBT"® are described. These "analytic rating guides" could be helpful if tasks modeled after those in TOEFL iBT were used for formative assessment, a purpose which is different from TOEFL iBT's primary use for admission…
Descriptors: Oral Language, Language Proficiency, Scaling, Scores
Qian, David D. – Language Assessment Quarterly, 2009
In recent decades, with an increasing application of computer technology to the delivery of oral language proficiency assessment, there have been renewed debates over the appropriateness of two different testing modes, namely, (a) face-to-face, or direct, testing, and (b) person-to-machine, or semi-direct, testing. Previous research conducted in…
Descriptors: Oral Language, Testing, Computers, Foreign Countries
Coniam, David – Educational Research and Evaluation, 2009
This paper describes a study comparing paper-based marking (PBM) and onscreen marking (OSM) in Hong Kong utilising English language essay scripts drawn from the live 2007 Hong Kong Certificate of Education Examination (HKCEE) Year 11 English Language Writing Paper. In the study, 30 raters from the 2007 HKCEE Writing Paper marked on paper 100…
Descriptors: Student Attitudes, Foreign Countries, Essays, Comparative Analysis
Glew, David; Meyer, Tracy; Sawyer, Becky; Schuhmann, Pete; Wray, Barry – Journal of Effective Teaching, 2011
Business schools are often criticized for the inadequate writing skills of their graduates. Improving writing skills involves first understanding the current skill level of students. This research attempts to provide insights into the effectiveness of the current method of assessing writing skills in a school of business at a large regional…
Descriptors: Undergraduate Students, Business Administration Education, Business Schools, Writing Skills