Showing all 14 results
Peer reviewed
Direct link
Choi, Yun Deok – Language Testing in Asia, 2022
A much-debated question in the L2 assessment field is whether computer familiarity should be considered a potential source of construct-irrelevant variance in computer-based writing (CBW) tests. This study aims to make a partial validity argument for an online source-based writing test (OSWT) designed for English placement testing (EPT), focusing on…
Descriptors: Test Validity, Scores, Computer Assisted Testing, English (Second Language)
Peer reviewed
Direct link
Dan Song; Alexander F. Tang – Language Learning & Technology, 2025
While many studies have addressed the benefits of technology-assisted L2 writing, limited research has delved into how generative artificial intelligence (GAI) supports students in completing their writing tasks in Mandarin Chinese. In this study, 26 university-level Mandarin Chinese foreign language students completed two writing tasks on two…
Descriptors: Artificial Intelligence, Second Language Learning, Standardized Tests, Writing Tests
Peer reviewed
Direct link
Ling, Guangming – Language Assessment Quarterly, 2017
To investigate whether the type of keyboard used in exams introduces any construct-irrelevant variance into TOEFL iBT Writing scores, we surveyed 17,040 TOEFL iBT examinees from 24 countries on their keyboard-related perceptions and preferences and analyzed the survey responses together with their test scores. Results suggest that controlling…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Writing Tests
Peer reviewed
PDF on ERIC
Owada, Kazuharu – Journal of Pan-Pacific Association of Applied Linguistics, 2017
Some English verbs can be used both intransitively and transitively. Verbs such as "break," "close," and "melt" can appear in intransitive active, transitive active, and passive constructions. Although native English speakers know in what kind of context a target verb is used in a certain construction,…
Descriptors: Foreign Countries, Undergraduate Students, Second Language Learning, English (Second Language)
Peer reviewed
Direct link
Jin, Yan; Yan, Ming – Language Assessment Quarterly, 2017
One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has introduced variance irrelevant to the construct being assessed. Analyses of the…
Descriptors: Writing Tests, Computer Assisted Testing, Computer Literacy, Construct Validity
Peer reviewed
Direct link
Khuder, Baraa; Harwood, Nigel – Written Communication, 2019
This mixed-methods study investigates writers' task representation, and the factors affecting it, in test-like and non-test-like conditions. Five advanced-level L2 writers each wrote two argumentative essays, one in test-like conditions and the other in non-test-like conditions in which the participants were allowed to use all the time and online…
Descriptors: Second Language Learning, Task Analysis, Advanced Students, Essays
Peer reviewed
Direct link
Zou, Xiao-Ling; Chen, Yan-Min – Technology, Pedagogy and Education, 2016
The effects of computer and paper test media on the writing scores and cognitive writing processes of EFL test-takers with different levels of computer familiarity were comprehensively explored from the learners' perspective as well as on the basis of related theory and practice. The results indicate significant differences in test scores among the…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Test Format
Peer reviewed
Direct link
Ling, Guangming – International Journal of Testing, 2016
To investigate a possible iPad-related mode effect, we tested 403 eighth graders in Indiana, Maryland, and New Jersey under three mode conditions through random assignment: a desktop computer, an iPad alone, and an iPad with an external keyboard. All students had used an iPad or computer for six months or longer. The 2-hour test included reading, math,…
Descriptors: Educational Testing, Computer Assisted Testing, Handheld Devices, Computers
White, Sheida; Kim, Young Yee; Chen, Jing; Liu, Fei – National Center for Education Statistics, 2015
This study examined whether fourth graders could fully demonstrate their writing skills on the computer, and which factors were associated with their performance on the National Assessment of Educational Progress (NAEP) computer-based writing assessment. The results suggest that high-performing fourth graders (those who scored in the upper 20 percent…
Descriptors: National Competency Tests, Computer Assisted Testing, Writing Tests, Grade 4
Peer reviewed
PDF on ERIC
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes, large-scale English language testing program. We examined the effectiveness of generic scoring and two variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Peer reviewed
PDF on ERIC
Aydin, Selami – Turkish Online Journal of Educational Technology - TOJET, 2006
This research aimed to investigate the effect of computers on the test reliability and inter-rater reliability of writing test scores of ESL learners. Writing samples from 20 pen-and-paper and 20 computer-group students were scored by two scorers using an analytic scoring method, and the scores were then analyzed with Cronbach's alpha. The results showed that the…
Descriptors: Foreign Countries, College Students, Computer Assisted Testing, English (Second Language)
Mazzeo, John; And Others – 1991
Two studies investigated the comparability of scores from paper-and-pencil and computer-administered versions of the College-Level Examination Program (CLEP) General Examinations in mathematics and English composition. The first study used a prototype computer-administered version of each examination, with 94 students for mathematics and 116 for…
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Yu, Lei; Livingston, Samuel A.; Larkin, Kevin C.; Bonett, John – ETS Research Report Series, 2004
This study compared essay scores from paper-based and computer-based versions of a writing test for prospective teachers. After applying a statistical control for demographic differences between the groups, scores for essays in the paper-based version averaged nearly half a standard deviation higher than those in the computer-based version…
Descriptors: Essays, Writing (Composition), Computer Assisted Testing, Technology Uses in Education
Peer reviewed
PDF on ERIC
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – ETS Research Report Series, 2004
This study investigated the comparability of computer-based testing (CBT) writing prompts in the Test of English as a Foreign Language™ (TOEFL®) for examinees of different native language backgrounds. A total of 81 writing prompts introduced from July 1998 through August 2000 were examined using a three-step logistic regression procedure for…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing