Showing all 7 results
Peer reviewed
Burton, J. Dylan – Language Assessment Quarterly, 2023
The effects of question or task complexity on second language speaking have traditionally been investigated using complexity, accuracy, and fluency measures. Response processes in speaking tests, however, may manifest in other ways, such as through nonverbal behavior. Eye behavior, in the form of averted gaze or blinking frequency, has been found…
Descriptors: Oral Language, Speech Communication, Language Tests, Eye Movements
Peer reviewed
Ling, Guangming – Language Assessment Quarterly, 2017
To investigate whether the type of keyboard used in exams introduces any construct-irrelevant variance to the TOEFL iBT Writing scores, we surveyed 17,040 TOEFL iBT examinees from 24 countries on their keyboard-related perceptions and preferences and analyzed the survey responses together with their test scores. Results suggest that controlling…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Writing Tests
Peer reviewed
Burton, John Dylan – Language Assessment Quarterly, 2020
An assumption underlying speaking tests is that scores reflect the ability to produce online, non-rehearsed speech. Speech produced in testing situations may, however, be less spontaneous if extensive test preparation takes place, resulting in memorized or rehearsed responses. If raters detect these patterns, they may conceptualize speech as…
Descriptors: Language Tests, Oral Language, Scores, Speech Communication
Peer reviewed
Jin, Yan; Yan, Ming – Language Assessment Quarterly, 2017
One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…
Descriptors: Writing Tests, Computer Assisted Testing, Computer Literacy, Construct Validity
Peer reviewed
Yu, Guoxing; Zhang, Jing – Language Assessment Quarterly, 2017
In this special issue on high-stakes English language testing in China, the two articles on computer-based testing (Jin & Yan; He & Min) highlight a number of consistent, ongoing challenges and concerns in the development and implementation of the nationwide IB-CET (Internet Based College English Test) and institutional computer-adaptive…
Descriptors: Foreign Countries, Computer Assisted Testing, English (Second Language), Language Tests
Peer reviewed
He, Lianzhen; Min, Shangchao – Language Assessment Quarterly, 2017
The first aim of this study was to develop a computer adaptive EFL test (CALT) that assesses test takers' listening and reading proficiency in English with dichotomous items and polytomous testlets. We reported in detail on the development of the CALT, including item banking, determination of suitable item response theory (IRT) models for item…
Descriptors: Computer Assisted Testing, Adaptive Testing, English (Second Language), Second Language Learning
Peer reviewed
Yu, Guoxing – Language Assessment Quarterly, 2010
Comparability studies on computer- and paper-based reading tests have focused on short texts and selected-response items via almost exclusively statistical modeling of test performance. The psychological effects of presentation mode and computer familiarity on individual students are under-researched. In this study, 157 students read extended…
Descriptors: Reading Tests, Familiarity, Computer Assisted Testing, English (Second Language)