Showing all 8 results
Peer reviewed
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Peer reviewed
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
Peer reviewed
Breland, Hunter; Lee, Yong-Won – Applied Measurement in Education, 2007
The objective of the present investigation was to examine the comparability of writing prompts for different gender groups in the context of the computer-based Test of English as a Foreign Language™ (TOEFL®-CBT). A total of 87 prompts administered from July 1998 through March 2000 were analyzed. An extended version of logistic regression for…
Descriptors: Learning Theories, Writing Evaluation, Writing Tests, Second Language Learning
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – 2002
Since the writing section of the Test of English as a Foreign Language (TOEFL) computer-based test (CBT) is a single-prompt essay test, it is important to ensure that each prompt is as fair as possible to all subgroups of examinees, such as those with different native language backgrounds. A particular topic of interest in this study is the…
Descriptors: Comparative Analysis, Computer Assisted Testing, English (Second Language), Essay Tests
Breland, Hunter; Lee, Yong-Won; Najarian, Michelle; Muraki, Eiji – Educational Testing Service, 2004
This investigation of the comparability of writing assessment prompts was conducted in two phases. In an exploratory Phase I, 47 writing prompts administered in the computer-based Test of English as a Foreign Language™ (TOEFL® CBT) from July through December 1998 were examined. Logistic regression procedures were used to estimate prompt…
Descriptors: Writing Evaluation, Quality Control, Gender Differences, Writing Tests
Peer reviewed
Breland, Hunter; Lee, Yong-Won; Muraki, Eiji – ETS Research Report Series, 2004
Eighty-three Test of English as a Foreign Language™ (TOEFL®) CBT writing prompts that were administered between July 1998 and August 2000 were examined in order to identify differences in scores that could be attributed to the response mode chosen by examinees (handwritten or word processed). Differences were examined statistically using…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Cues
Peer reviewed
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – ETS Research Report Series, 2004
This study investigated the comparability of computer-based testing (CBT) writing prompts in the Test of English as a Foreign Language™ (TOEFL®) for examinees of different native language backgrounds. A total of 81 writing prompts introduced from July 1998 through August 2000 were examined using a three-step logistic regression procedure for…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Peer reviewed
Stricker, Lawrence J.; Rock, Donald A.; Lee, Yong-Won – ETS Research Report Series, 2005
This study assessed the factor structure of the LanguEdge™ test and the invariance of its factors across language groups. Confirmatory factor analyses of individual tasks and subsets of items in the four sections of the test, Listening, Reading, Speaking, and Writing, were carried out for Arabic-, Chinese-, and Spanish-speaking test takers. Two…
Descriptors: Factor Structure, Language Tests, Factor Analysis, Semitic Languages