Showing 1 to 15 of 19 results
Soohye Yeom – ProQuest LLC, 2023
With the wide introduction of English-medium instruction (EMI) to higher education institutions throughout East Asian countries, many East Asian universities are using English proficiency tests that were not originally designed for this context to make admissions and placement decisions. To support the use of these tests in this new EMI context,…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Writing Tests
Peer reviewed
Kyle, Kristopher; Choe, Ann Tai; Eguchi, Masaki; LaFlair, Geoff; Ziegler, Nicole – ETS Research Report Series, 2021
A key piece of a validity argument for a language assessment tool is clear overlap between assessment tasks and the target language use (TLU) domain (i.e., the domain description inference). The TOEFL 2000 Spoken and Written Academic Language (T2K-SWAL) corpus, which represents a variety of academic registers and disciplines in traditional…
Descriptors: Comparative Analysis, Second Language Learning, English (Second Language), Language Tests
Mundine, Jennifer – ProQuest LLC, 2016
Nursing programs have embraced distance learning in their curricula, but discussion is ongoing about course assignments and grading criteria to increase examination scores in nursing distance learning courses. Because course examinations are a predictor of success on the postgraduate licensing examination (NCLEX-RN), the purpose of this study was…
Descriptors: Nursing Education, Distance Education, Comparative Analysis, Assignments
Peer reviewed
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Peer reviewed
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may offer similar advantages, and verifying this hypothesis in the context of item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
Peer reviewed
Sydorenko, Tetyana; Maynard, Carson; Guntly, Erin – TESL Canada Journal, 2014
The criteria by which raters judge pragmatic appropriateness of language learners' speech acts are underexamined, especially when raters evaluate extended discourse. To shed more light on this process, the present study investigated what factors are salient to raters when scoring pragmatic appropriateness of extended request sequences, and which…
Descriptors: Evaluators, Discourse Analysis, Pragmatics, Evaluation Criteria
Peer reviewed
Veldkamp, Bernard P. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…
Descriptors: Selection, Criteria, Bayesian Statistics, Computer Assisted Testing
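For readers unfamiliar with the term, the sketch below illustrates one common Bayesian item selection criterion, posterior-weighted information: instead of evaluating item information at a single interim ability estimate, the criterion averages it over the current posterior for theta. It is a minimal, hypothetical example, not a reproduction of Veldkamp's constrained procedure, and all item parameters and grid settings are invented for illustration.

    import numpy as np

    def p_2pl(theta, a, b):
        # Probability of a correct response under the 2PL model
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        # Fisher information of a 2PL item: a^2 * P * (1 - P)
        p = p_2pl(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    def posterior_weighted_information(posterior, grid, a, b):
        # Bayesian criterion: weight each item's information by the current
        # posterior over theta rather than plugging in a point estimate.
        info = item_information(grid[:, None], a, b)   # grid points x items
        return (posterior[:, None] * info).sum(axis=0)

    # Illustrative posterior on a theta grid and a small invented item bank
    grid = np.linspace(-4, 4, 81)
    posterior = np.exp(-0.5 * grid ** 2)
    posterior /= posterior.sum()
    a = np.array([1.2, 0.8, 1.5])
    b = np.array([-0.5, 0.0, 1.0])
    best_item = int(np.argmax(posterior_weighted_information(posterior, grid, a, b)))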
Peer reviewed
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability
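As a rough illustration of how a computerized classification test reaches a pass/fail decision, the sketch below applies the sequential probability ratio test. A plain Rasch response function is used as a stand-in because the exact 1PL-AG parameterization is not reproduced here, and the cut point, error rates, item difficulties, and responses are all invented.

    import numpy as np

    def p_rasch(theta, b):
        # Rasch (1PL) probability of a correct response; a stand-in for the
        # 1PL-AG response function, which is not reproduced in this sketch.
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def sprt_decision(responses, b, cut=0.0, delta=0.5, alpha=0.05, beta=0.05):
        # Sequential probability ratio test for a pass/fail classification:
        # compare the likelihood of the observed responses at theta = cut + delta
        # against theta = cut - delta and stop once a decision bound is crossed.
        hi, lo = p_rasch(cut + delta, b), p_rasch(cut - delta, b)
        llr = np.sum(responses * np.log(hi / lo) +
                     (1 - responses) * np.log((1 - hi) / (1 - lo)))
        upper = np.log((1 - beta) / alpha)
        lower = np.log(beta / (1 - alpha))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
        return "continue testing"

    # Example: five administered items and their scored responses
    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    responses = np.array([1, 1, 1, 0, 1])
    print(sprt_decision(responses, b))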
Peer reviewed
Makransky, Guido; Glas, Cees A. W. – International Journal of Testing, 2013
Cognitive ability tests are widely used in organizations around the world because they have high predictive validity in selection contexts. Although these tests typically measure several subdomains, testing is usually carried out for a single subdomain at a time. This can be ineffective when the subdomains assessed are highly correlated. This…
Descriptors: Foreign Countries, Cognitive Ability, Adaptive Testing, Feedback (Response)
Peer reviewed
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. S. – Journal of Educational and Behavioral Statistics, 2008
During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher's information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Evaluation Criteria, Item Analysis
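To make the comparison concrete, the following sketch computes both criteria for a small, invented dichotomous 2PL item bank: Fisher information at a single point estimate, and a simple grid approximation of the Kullback-Leibler global information index, which averages divergence over a neighbourhood of the interim estimate. It illustrates the general idea only, not the authors' polytomous, D-optimality-based design.

    import numpy as np

    def p_2pl(theta, a, b):
        # Probability of a correct response under the 2PL model
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def fisher_information(theta, a, b):
        # I(theta) = a^2 * P * (1 - P): local precision at a point estimate
        p = p_2pl(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    def kl_global_information(theta_hat, a, b, delta=1.0, n=41):
        # Grid approximation of the Kullback-Leibler index: the divergence
        # between response distributions at theta and theta_hat, averaged over
        # [theta_hat - delta, theta_hat + delta], which makes the criterion less
        # sensitive to an unstable early estimate than point-wise Fisher information.
        grid = np.linspace(theta_hat - delta, theta_hat + delta, n)
        p0 = p_2pl(theta_hat, a, b)
        p = p_2pl(grid[:, None], a, b)
        kl = p0 * np.log(p0 / p) + (1 - p0) * np.log((1 - p0) / (1 - p))
        return kl.mean(axis=0)

    # Illustrative bank: the next item is the one with the larger criterion value
    a = np.array([1.4, 1.0, 0.7])
    b = np.array([0.2, -0.8, 1.5])
    print(fisher_information(0.0, a, b))
    print(kl_global_information(0.0, a, b))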
Lin, Chuan-Ju – Journal of Technology, Learning, and Assessment, 2008
The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered, fixed test forms, or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…
Descriptors: Item Response Theory, Test Theory, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Burrows, Steven; Shortis, Mark – Australasian Journal of Educational Technology, 2011
Online marking and feedback systems are critical for providing timely and accurate feedback to students and maintaining the integrity of results in large class teaching. Previous investigations have involved much in-house development, and more consideration is needed for deploying or customising off-the-shelf solutions. Furthermore, keeping up to…
Descriptors: Foreign Countries, Integrated Learning Systems, Feedback (Response), Evaluation Criteria
Stricker, Lawrence J.; Attali, Yigal – Educational Testing Service, 2010
The principal aims of this study, a conceptual replication of an earlier investigation of the TOEFL® computer-based test, or TOEFL CBT, in Buenos Aires, Cairo, and Frankfurt, were to assess test takers' reported acceptance of the TOEFL Internet-based test, or TOEFL iBT™, and its associations with possible determinants of this acceptance and…
Descriptors: Computer Attitudes, Questionnaires, Comparative Analysis, Foreign Countries
Peer reviewed
Wang, Tianyou; Kolen, Michael J. – Journal of Educational Measurement, 2001
Reviews research literature on comparability issues in computerized adaptive testing (CAT) and synthesizes issues specific to comparability and test security. Develops a framework for evaluating comparability that contains three categories of criteria: (1) validity; (2) psychometric property/reliability; and (3) statistical assumption/test…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Criteria
Peer reviewed
Farrell, Albert D. – Computers in Human Behavior, 1989
Argues that guidelines for evaluating computer applications within psychology are not having sufficient impact on professional practices because a gap exists between information available to users and information needed to make informed decisions. Surveys of software vendors and practicing psychologists are described to support this view and…
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software, Decision Making