Showing 1 to 15 of 35 results
Peer reviewed
Direct link
Kirsch, Irwin; Braun, Henry – Large-scale Assessments in Education, 2020
Mounting concerns about the levels and distributions of human capital, as well as how they are associated with outcomes for individuals and societies, have contributed to an increase in the number of national and international surveys. These surveys not only examine skills among school-age and adult populations, they also facilitate evaluation of…
Descriptors: International Assessment, Computer Assisted Testing, Human Capital, Program Evaluation
Patrick C. Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Institute, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Performance Based Assessment, Evaluation Criteria, Evaluation Methods, Test Bias
Barnett, Elisabeth A.; Reddy, Vikash – Center for the Analysis of Postsecondary Readiness, 2017
Many postsecondary institutions, and community colleges in particular, require that students demonstrate specified levels of literacy and numeracy before taking college-level courses. Typically, students have been assessed using two widely available tests--ACCUPLACER and Compass. However, placement testing practice is beginning to change for three…
Descriptors: Student Placement, College Entrance Examinations, Educational Practices, Computer Assisted Testing
Peer reviewed
Direct link
Galaczi, Evelina; Taylor, Lynda – Language Assessment Quarterly, 2018
This article on interactional competence provides an overview of the historical influences that have shaped theoretical conceptualisations of this construct as it relates to spoken language use, leading to the current view of it as involving both cognitive and social dimensions, and then describes its operationalisation in tests and assessment…
Descriptors: Communicative Competence (Languages), Second Language Learning, Language Tests, Evaluation Criteria
Peer reviewed
PDF on ERIC
Rybanov, Alexander Aleksandrovich – Turkish Online Journal of Distance Education, 2013
A set of criteria is offered for assessing the efficiency of the process of forming answers to multiple-choice test items. To increase the accuracy of computer-assisted testing results, it is suggested that the dynamics of forming the final answer be assessed using two factors: a loss-of-time factor and a correct-choice factor. The model…
Descriptors: Evaluation Criteria, Efficiency, Multiple Choice Tests, Test Items
Peer reviewed
Direct link
Lin, Chuan-Ju – Educational and Psychological Measurement, 2011
This study compares four item selection criteria for two-category computerized classification testing: (1) Fisher information (FI), (2) Kullback-Leibler information (KLI), (3) weighted log-odds ratio (WLOR), and (4) mutual information (MI), with respect to the efficiency and accuracy of classification decisions using the sequential probability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Test Items
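The entry above compares item selection criteria for computerized classification testing. As a rough illustration of the first criterion only (not the study's actual code), the sketch below selects the item with maximal Fisher information at the current ability estimate under a two-parameter logistic (2PL) IRT model; the item pool and all parameter values are invented for the example:

```python
import math

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta_hat, item_pool, administered):
    """Pick the unadministered item with maximal Fisher information at theta_hat."""
    candidates = [(i, fisher_info(theta_hat, a, b))
                  for i, (a, b) in enumerate(item_pool)
                  if i not in administered]
    return max(candidates, key=lambda t: t[1])[0]

# Hypothetical pool of (discrimination a, difficulty b) pairs.
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.4), (1.0, 1.2)]
best = select_item(theta_hat=0.5, item_pool=pool, administered={0})
```

Maximum-information selection favors highly discriminating items whose difficulty sits near the current ability estimate, which is why the abstract contrasts it with alternatives (KLI, WLOR, MI) that behave differently when the estimate is still uncertain.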
Peer reviewed
Direct link
Ramineni, Chaitanya; Williamson, David M. – Assessing Writing, 2013
In this paper, we provide an overview of psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for…
Descriptors: Educational Testing, Guidelines, Scoring, Psychometrics
Peer reviewed
Direct link
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
Peer reviewed
PDF on ERIC
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed
PDF on ERIC
Faurer, Judson C. – Contemporary Issues in Education Research, 2013
Are prospective employers getting "quality" educated, degreed applicants and are academic institutions that offer online degree programs ensuring the quality control of the courses/programs offered? The issue specifically addressed in this paper is not with all institutions offering degrees through online programs or even with all online…
Descriptors: Online Courses, Validity, Grades (Scholastic), Quality Control
Peer reviewed
PDF on ERIC
Veldkamp, Bernard P. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
Applying Bayesian item selection criteria in computerized adaptive testing might reduce the bias and mean squared error (MSE) of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…
Descriptors: Selection, Criteria, Bayesian Statistics, Computer Assisted Testing
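As a rough illustration of the Bayesian flavor of item selection discussed in the entry above (not Veldkamp's actual procedure), the sketch below scores an item by its Fisher information averaged over a discretized posterior for ability, rather than at a single point estimate; the grid, posterior weights, and item parameters are all invented:

```python
import math

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def posterior_weighted_info(a, b, grid, posterior):
    """Average item information over a discretized posterior for theta."""
    return sum(w * fisher_info(t, a, b) for t, w in zip(grid, posterior))

# Hypothetical coarse ability grid with normalized posterior weights.
grid = [-2.0, -1.0, 0.0, 1.0, 2.0]
posterior = [0.05, 0.2, 0.5, 0.2, 0.05]
score = posterior_weighted_info(1.2, 0.0, grid, posterior)
```

Weighting information by the posterior makes the criterion less sensitive to a poor early ability estimate, which is the motivation the abstract gives for considering Bayesian criteria in constrained adaptive testing.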
Peer reviewed
Direct link
Steinmetz, Jean-Paul; Brunner, Martin; Loarer, Even; Houssemand, Claude – Psychological Assessment, 2010
The Wisconsin Card Sorting Test (WCST) assesses executive and frontal lobe function and can be administered manually or by computer. Despite the widespread application of the 2 versions, the psychometric equivalence of their scores has rarely been evaluated and only a limited set of criteria has been considered. The present experimental study (N =…
Descriptors: Computer Assisted Testing, Psychometrics, Test Theory, Scores
Peer reviewed
Direct link
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. S. – Journal of Educational and Behavioral Statistics, 2008
During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher's information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Evaluation Criteria, Item Analysis
Peer reviewed
Direct link
Burrows, Steven; Shortis, Mark – Australasian Journal of Educational Technology, 2011
Online marking and feedback systems are critical for providing timely and accurate feedback to students and maintaining the integrity of results in large class teaching. Previous investigations have involved much in-house development and more consideration is needed for deploying or customising off the shelf solutions. Furthermore, keeping up to…
Descriptors: Foreign Countries, Integrated Learning Systems, Feedback (Response), Evaluation Criteria
Peer reviewed
Direct link
Yin, Alexander C.; Volkwein, J. Fredericks – New Directions for Institutional Research, 2010
After surveying 1,827 students in their final year at eighty randomly selected two-year and four-year public and private institutions, American Institutes for Research (2006) reported that approximately 30 percent of students in two-year institutions and nearly 20 percent of students in four-year institutions have only basic quantitative…
Descriptors: Standardized Tests, Basic Skills, College Admission, Educational Testing