Showing 91 to 105 of 171 results
Peer reviewed
Koys, Daniel – Journal of Education for Business, 2010
The author found that the GPA at the end of the MBA program is most accurately predicted by the Graduate Management Admission Test (GMAT) and the Test of English as a Foreign Language (TOEFL). MBA GPA is also predicted, though less accurately, by the Scholastic Level Exam, a mathematics test, undergraduate GPA, and previous career progression. If…
Descriptors: Grade Point Average, Predictive Validity, Prediction, Foreign Countries
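As a hedged illustration of the prediction design this abstract describes, the sketch below fits an ordinary least-squares regression of MBA GPA on admissions measures. All variable names and data are invented for illustration; this is not the study's model or data.

```python
# Minimal sketch (not the study's actual model): predicting final MBA GPA
# from admissions measures with ordinary least squares. Predictors and
# the simulated outcome are hypothetical illustrations of the design.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
gmat = rng.normal(550, 80, n)    # GMAT total score
toefl = rng.normal(90, 15, n)    # TOEFL total score
ugpa = rng.normal(3.1, 0.4, n)   # undergraduate GPA
# Simulated outcome: MBA GPA loosely driven by the predictors plus noise
mba_gpa = 1.0 + 0.002 * gmat + 0.008 * toefl + 0.2 * ugpa + rng.normal(0, 0.2, n)

X = np.column_stack([gmat, toefl, ugpa])
model = LinearRegression().fit(X, mba_gpa)
print("R^2:", round(model.score(X, mba_gpa), 3))  # variance explained
print("coefficients:", model.coef_)
```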
Peer reviewed
Tao, Yu-Hui; Wu, Yu-Lung; Chang, Hsin-Yi – Educational Technology & Society, 2008
Computer adaptive testing (CAT) is theoretically sound and efficient, and is commonly seen in larger testing programs. It is, however, rarely seen in smaller-scale settings, such as classrooms or daily business routines, because of the complexity of most adopted Item Response Theory (IRT) models. While the Sequential Probability Ratio Test…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, English (Second Language)
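The Sequential Probability Ratio Test mentioned above can drive a simple pass/fail classification test. The sketch below is a minimal illustration under a Rasch model; the thresholds and item difficulties are invented, and this is not the authors' implementation.

```python
# Minimal sketch of the Sequential Probability Ratio Test (SPRT) for a
# pass/fail classification under a Rasch model. All parameter values
# are illustrative assumptions, not values from the study above.
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, item_difficulties,
                  theta_fail=-0.5, theta_pass=0.5, alpha=0.05, beta=0.05):
    """Return 'pass', 'fail', or 'continue' after the observed responses."""
    upper = math.log((1 - beta) / alpha)  # cross this: accept H1 (pass)
    lower = math.log(beta / (1 - alpha))  # cross this: accept H0 (fail)
    llr = 0.0
    for u, b in zip(responses, item_difficulties):
        p1, p0 = rasch_p(theta_pass, b), rasch_p(theta_fail, b)
        llr += math.log(p1 / p0) if u == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"  # undecided: administer more items

print(sprt_classify([1, 1, 0, 1, 1, 1, 1], [0.0, 0.2, -0.1, 0.3, 0.0, 0.1, -0.2]))
```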
Witt, Autumn Song – ProQuest LLC, 2010
This dissertation follows an oral language assessment tool from initial design and implementation to validity analysis. The specialized variables of this study are the population (international teaching assistants) and the purpose (spoken assessment as a hiring prerequisite). However, the process can easily be applied to other populations and…
Descriptors: Language Skills, Academic Discourse, Oral Language, Predictive Validity
Sawaki, Yasuyo; Nissan, Susan – Educational Testing Service, 2009
The study investigated the criterion-related validity of the "Test of English as a Foreign Language"™ Internet-based test (TOEFL® iBT) Listening section by examining its relationship to a criterion measure designed to reflect language-use tasks that university students encounter in everyday academic life: listening to academic…
Descriptors: Test Validity, Language Tests, English (Second Language), Computer Assisted Testing
Peer reviewed
Jang, Eunice Eunhee; Roussos, Louis – International Journal of Testing, 2009
In this article we present results of a Differential Item Functioning (DIF) study using Shealy and Stout's (1993) multidimensionality-based DIF analysis framework. In this framework, differences in test score distributions across different groups of examinees may be a result of multidimensionality if secondary dimensions (not the primary dimension…
Descriptors: Test Bias, Vocabulary, English (Second Language), Scores
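For readers unfamiliar with DIF screening, the sketch below implements a simpler classical alternative to the Shealy-Stout multidimensionality framework cited above: the Mantel-Haenszel common odds ratio, which compares groups on a single item at matched total-score levels. Data shapes and the matching variable are hypothetical.

```python
# Minimal sketch of Mantel-Haenszel DIF screening (a simpler method than
# the Shealy-Stout framework discussed above). Examinees are stratified
# by total score so group differences on one item are compared at
# matched ability levels.
import numpy as np

def mantel_haenszel_odds_ratio(item, group, total):
    """item: 0/1 responses; group: 0=reference, 1=focal; total: matching score."""
    num = den = 0.0
    for s in np.unique(total):
        m = total == s
        a = np.sum((item == 1) & (group == 0) & m)  # reference correct
        b = np.sum((item == 0) & (group == 0) & m)  # reference incorrect
        c = np.sum((item == 1) & (group == 1) & m)  # focal correct
        d = np.sum((item == 0) & (group == 1) & m)  # focal incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else float("nan")

# An odds ratio far from 1.0 flags the item for DIF between the groups.
```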
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
Roemer, Ann – College and University, 2002
Describes the Test of English as a Foreign Language (TOEFL) and the Advanced Placement in International English Language (APIEL) and evaluates both tests on three basic types of validity criteria: content, construct, and criterion-related. Concludes that the TOEFL has serious limitations, and that the APIEL may be more useful. (EV)
Descriptors: Construct Validity, Content Validity, English (Second Language), Foreign Students
Peer reviewed
Sawaki, Yasuyo; Stricker, Lawrence J.; Oranje, Andreas H. – Language Testing, 2009
This construct validation study investigated the factor structure of the Test of English as a Foreign Language™ Internet-based test (TOEFL® iBT). An item-level confirmatory factor analysis was conducted for a test form completed by participants in a field study. A higher-order factor model was identified, with a higher-order general factor…
Descriptors: Speech Communication, Construct Validity, Factor Structure, Factor Analysis
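The higher-order factor model this abstract identifies can be illustrated numerically. The sketch below builds the model-implied item covariance matrix for four first-order factors (e.g., reading, listening, speaking, writing) loading on one general factor. All parameter values are invented, not the study's estimates.

```python
# Minimal numerical sketch (not the study's fitted model) of the implied
# covariance structure of a higher-order factor model: four first-order
# factors loading on one higher-order general factor.
import numpy as np

n_items_per_factor = 3
lam1 = 0.7                                # item loadings on first-order factors
gamma = np.array([0.8, 0.7, 0.6, 0.75])   # first-order loadings on g

# Item loading matrix Lambda: each item loads only on its own factor
Lambda = np.kron(np.eye(4), np.full((n_items_per_factor, 1), lam1))
# First-order factor covariance implied by the general factor:
# Phi = gamma gamma' + Psi, with unique first-order variances on the diagonal
Psi = np.diag(1 - gamma**2)               # standardized first-order factors
Phi = np.outer(gamma, gamma) + Psi
Theta = np.eye(12) * (1 - lam1**2)        # item uniquenesses (standardized items)

Sigma = Lambda @ Phi @ Lambda.T + Theta   # model-implied item covariance
print(np.round(Sigma[:4, :4], 3))
```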
Heil, Donald K.; Aleamoni, Lawrence M. – 1974
The grades which foreign students receive are not always based on the same criteria as the grades assigned to native-born American students. The use of standardized test scores provides a common database from which to evaluate the relative proficiency level of foreign students. This study examines the Test of English as a Foreign Language (TOEFL) and…
Descriptors: English (Second Language), Foreign Students, Graduate Students, Language Ability
Garcia Laborda, Jesus – European Association for Computer-Assisted Language Learning (EUROCALL), 2008
In recent years, the Educational Testing Service has developed two models of the computer-based Test of English as a Foreign Language (TOEFL). However, the computerization of the test has revealed a number of problems that vary with the testees' origin. This paper identifies some of these problems through short interviews with four…
Descriptors: Computer Assisted Testing, High Stakes Tests, Higher Education, Foreign Countries
Peer reviewed
Boldt, Robert F. – Language Testing, 1992
The proportional item response curve (PIRC) assumption was tested by using PIRC to predict the item scores of selected examinees on selected items. Findings show approximate accuracies of prediction for PIRC, the three-parameter logistic model, and a modified Rasch model. (12 references) (Author/LB)
Descriptors: Comparative Analysis, English (Second Language), Factor Analysis, Item Response Theory
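For reference, the sketch below writes out the item response functions being compared in this study: the three-parameter logistic (3PL) model and the Rasch model. The parameter values are illustrative only.

```python
# Minimal sketch of the item response functions compared above:
# the three-parameter logistic (3PL) model and the Rasch model.
import math

def p_3pl(theta, a, b, c):
    """3PL: discrimination a, difficulty b, guessing floor c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def p_rasch(theta, b):
    """Rasch: one-parameter logistic, equal discrimination, no guessing."""
    return 1 / (1 + math.exp(-(theta - b)))

# Compare predicted probabilities at a few ability levels (illustrative values)
for theta in (-1.0, 0.0, 1.0):
    print(theta, round(p_3pl(theta, a=1.2, b=0.0, c=0.2), 3),
          round(p_rasch(theta, b=0.0), 3))
```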
Buell, James G. – 1992
This paper discusses research conducted in the spring of 1991 that measured the relationship of reading subtest scores to teacher ratings of students' reading abilities. Sixty-eight advanced-level students in an intensive English program took an institutional version of the Test of English as a Foreign Language (TOEFL) and a specimen reading…
Descriptors: Advanced Students, Content Validity, English (Second Language), Language Proficiency
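A minimal sketch of the kind of analysis this abstract describes: correlating reading subtest scores with teacher ratings. Spearman's rank correlation suits ordinal ratings; the scores and ratings below are invented, not the study's data.

```python
# Minimal sketch (invented data): relating a reading subtest score to
# ordinal teacher ratings of reading ability.
import numpy as np
from scipy.stats import spearmanr, pearsonr

subtest = np.array([41, 48, 52, 55, 58, 60, 63, 65])  # reading subtest scores
ratings = np.array([2, 3, 3, 4, 3, 4, 5, 5])          # teacher ratings, 1-5 scale

rho, p = spearmanr(subtest, ratings)
r, p2 = pearsonr(subtest.astype(float), ratings.astype(float))
print(f"Spearman rho = {rho:.2f} (p = {p:.3f}), Pearson r = {r:.2f}")
```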
Peer reviewed
Schmitt, Norbert – Language Testing, 1999
One way of determining the construct validity of vocabulary items in language tests is to interview subjects directly after they take the items, to ascertain what they know about the target words in question. This approach was applied within the framework of lexical competency in a study of the behavior of lexical items on the Test of English as a Foreign…
Descriptors: Associative Learning, Construct Validity, English (Second Language), Foreign Countries
Peer reviewed
Qian, David D. – Language Assessment Quarterly, 2008
In the last 15 years or so, language testing practitioners have increasingly favored assessing vocabulary in context. The discrete-point vocabulary measure used in the old version of the Test of English as a Foreign Language (TOEFL) has long been criticized for encouraging test candidates to memorize wordlists out of context, although test items…
Descriptors: Predictive Validity, Context Effect, Vocabulary, English (Second Language)
Peer reviewed
Xi, Xiaoming – Language Assessment Quarterly, 2007
Although the primary use of the speaking section of the Test of English as a Foreign Language Internet-based test (TOEFL® iBT Speaking) is to inform admissions decisions at English-medium universities, it may also be useful as an initial screening measure for international teaching assistants (ITAs). This study provides criterion-related…
Descriptors: Test Validity, Speech Tests, Language Tests, English (Second Language)
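As a hedged illustration of using a speaking score for initial screening, the sketch below applies a cutoff and computes sensitivity and specificity against a criterion decision. The scores, criterion outcomes, and cutoff are all invented for illustration.

```python
# Minimal sketch (invented data): screening with a speaking-score cutoff
# and checking agreement with a criterion qualification decision.
import numpy as np

scores = np.array([18, 20, 22, 23, 24, 25, 26, 27, 28, 29])  # speaking scores
qualified = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])         # criterion decision
cutoff = 24

screen_pass = scores >= cutoff
sensitivity = np.mean(screen_pass[qualified == 1])   # qualified who pass the screen
specificity = np.mean(~screen_pass[qualified == 0])  # unqualified who are screened out
print(f"cutoff {cutoff}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```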