Showing all 13 results
Peer reviewed
Uchihara, Takumi; Clenton, Jon – Language Learning Journal, 2023
Research has suggested the important role of vocabulary knowledge in second language (L2) speaking proficiency. However, earlier studies tended to disregard the congruence in test format between assessing vocabulary knowledge and speaking skills, with the former predominantly measured in written format. The current study measured vocabulary…
Descriptors: Vocabulary Development, Second Language Learning, Second Language Instruction, Language Proficiency
Peer reviewed
Hyeonah Kang – Journal of Second Language Acquisition and Teaching, 2022
Using a lexical decision task, Wolter and Yamashita (2015) showed that collocations that exist only in L1 but not in L2 were not processed faster than collocations that only exist in L2 but not in L1 or a random combination of two words. This result seems to support the age/order of acquisition effects (Carroll & White, 1973) over Jiang's…
Descriptors: Language Processing, Phrase Structure, Language Usage, Decision Making
Peer reviewed
King, Rosemary; Blayney, Paul; Sweller, John – Accounting Education, 2021
This study offers evidence of the impact of language background on the performance of students enrolled in an accounting study unit. It aims to quantify the effects of language background on performance in essay questions, compared to calculation questions requiring an application of procedures. Marks were collected from 2850 students. The results…
Descriptors: Cognitive Ability, Accounting, Native Language, Second Language Learning
Peer reviewed
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
Peer reviewed
Ahmadi, Alireza; Sadeghi, Elham – Language Assessment Quarterly, 2016
In the present study we investigated the effect of test format on oral performance in terms of test scores and discourse features (accuracy, fluency, and complexity). Moreover, we explored how the scores obtained on different test formats relate to such features. To this end, 23 Iranian EFL learners participated in three test formats of monologue,…
Descriptors: Oral Language, Comparative Analysis, Language Fluency, Accuracy
Peer reviewed
Arthur, Neal; Everaert, Patricia – Accounting Education, 2012
This paper addresses the question of whether the increasing use of multiple-choice questions will favour particular student groups, i.e. male or female students. Using data from Belgium, this paper empirically examines the existence of a gender effect by comparing the relative performance of male and female students in both multiple-choice and…
Descriptors: Accounting, Business Administration Education, Gender Differences, Multiple Choice Tests
Peer reviewed
Pellicer-Sanchez, Ana; Schmitt, Norbert – Language Testing, 2012
Despite a number of research studies investigating the Yes-No vocabulary test format, one main question remains unanswered: What is the best scoring procedure to adjust for testee overestimation of vocabulary knowledge? Different scoring methodologies have been proposed based on the inclusion and selection of nonwords in the test. However, there…
Descriptors: Language Tests, Scoring, Reaction Time, Vocabulary Development
Peer reviewed
Pinsoneault, Terry B. – Computers in Human Behavior, 1996
Computer-assisted and paper-and-pencil-administered formats for the Minnesota Multiphasic Personality Inventories were investigated. Subjects were 32 master's and doctoral-level counseling students. Findings indicated that the two formats were comparable and that students preferred the computer-assisted format. (AEF)
Descriptors: Comparative Analysis, Computer Assisted Testing, Graduate Students, Higher Education
Peer reviewed
Swartz, Stephen M. – Journal of Education for Business, 2006
The confidence-level (information-referenced testing; IRT) design is an attempt to improve upon the multiple-choice format by allowing students to express a level of confidence in the answers they choose. In this study, the author evaluated student perceptions of the ease of use and accuracy of, and general preference for, traditional multiple…
Descriptors: Multiple Choice Tests, Essay Tests, Graduate Students, Student Attitudes
Peer reviewed
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine comparability of an online version to the original paper-pencil version of Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
Geranpayeh, Ardeshir – Edinburgh Working Papers in Applied Linguistics, 1994
This paper reports on a study conducted to determine if comparisons between scores on the Test of English as a Foreign Language (TOEFL) and the International English Language Testing Service (IELTS) are justifiable. The test scores of 216 Iranian graduate students who took the TOEFL and IELTS, as well as the Iranian Ministry of Culture and Higher…
Descriptors: Comparative Analysis, English (Second Language), Foreign Countries, Graduate Students
Peer reviewed
Hancock, Gregory R. – Journal of Experimental Education, 1994
To investigate the ability of multiple-choice tests to assess higher order thinking skills, examinations were constructed as half multiple choice and half constructed response. Results with 90 undergraduate and graduate students indicate that the 2 formats measure similar constructs at different levels of complexity. (SLD)
Descriptors: Cognitive Processes, Comparative Analysis, Constructed Response, Educational Assessment
Perkins, Bob – 1993
This research investigated whether computer anxiety is different if the measure is administered by computer rather than by paper and pencil. The study compared two groups of students (N=83) who were gathered from three undergraduate sections and one graduate section of a required computer class for in-service and pre-service teachers using anxiety…
Descriptors: Anxiety, Comparative Analysis, Computer Assisted Instruction, Computer Assisted Testing