Showing all 9 results
Peer reviewed
Brown, Kevin – CEA Forum, 2015
In this article, the author describes his project to take every standardized exam English majors take. During the summer and fall semesters of 2012, the author signed up for and took the GRE General Test, the Praxis Content Area Exam (English Language, Literature, and Composition: Content Knowledge), the Senior Major Field Tests in…
Descriptors: College Faculty, College English, Test Preparation, Standardized Tests
Peer reviewed
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
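A minimal sketch of the generic scoring idea described in the Attali, Bridgeman, and Trapani abstract above: a single, fixed set of linguistic features combined with fixed weights, so the resulting score carries the same meaning across prompts. The feature names and weights below are illustrative assumptions, not the actual e-rater model.

# Illustrative only: hypothetical features and weights, not e-rater's real ones.
FEATURE_WEIGHTS = {
    "grammar_errors_per_100_words": -0.8,
    "word_variety": 0.5,
    "average_sentence_length": 0.3,
    "organization": 1.0,
}

def generic_essay_score(features: dict) -> float:
    """Combine standardized feature values into one score using fixed weights."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

# Usage with standardized (z-score-like) feature values for one essay:
print(generic_essay_score({
    "grammar_errors_per_100_words": 1.2,
    "word_variety": 0.9,
    "average_sentence_length": 0.4,
    "organization": 0.7,
}))

Because the weights are not re-estimated for each new prompt, the same feature profile yields the same score no matter which prompt the essay answers, which is the sense in which the scores are "generic."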
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang – Applied Psychological Measurement, 2001
Proposed a refinement of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999), based on the stratification of items developed by D. Weiss (1973). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
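A rough sketch of a-stratified item selection, the Chang and Ying (1999) procedure that the Chang, Qian, and Yang abstract above refines: items are grouped into strata by their discrimination parameter (a), low-a strata are used early in the test and higher-a strata later, and within the current stratum the item whose difficulty (b) is closest to the current ability estimate is administered. The parameter names and the simple stratum schedule here are assumptions for illustration, not the published algorithm in full.

def select_item(item_bank, administered, stage, n_stages, theta):
    """item_bank: list of dicts with 'a' (discrimination) and 'b' (difficulty)."""
    available = [item for item in item_bank if item not in administered]
    # Rank remaining items by discrimination and cut into roughly equal strata.
    ranked = sorted(available, key=lambda item: item["a"])
    stratum_size = max(1, len(ranked) // n_stages)
    stratum = ranked[stage * stratum_size:(stage + 1) * stratum_size] or ranked
    # Within the current stratum, pick the item whose difficulty is closest
    # to the provisional ability estimate theta.
    return min(stratum, key=lambda item: abs(item["b"] - theta))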
Carlson, Sybil B.; Ward, William C. – 1988
Issues concerning the cost and feasibility of using Formulating Hypotheses (FH) test item types for the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
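A small sketch of the reporting rule summarized in the Slater and Schaeffer abstract above: a section receives no score if fewer than 80% of its items are answered, or if the examinee leaves the section before time expires without answering every item. The function and parameter names are illustrative.

def section_is_scorable(items_answered: int, items_total: int, time_expired: bool) -> bool:
    # Fewer than 80% of the items answered: no score.
    if items_answered < 0.8 * items_total:
        return False
    # Left the section early without answering every item: no score.
    if items_answered < items_total and not time_expired:
        return False
    return True

print(section_is_scorable(items_answered=24, items_total=30, time_expired=True))   # True
print(section_is_scorable(items_answered=20, items_total=30, time_expired=True))   # False: under 80%
print(section_is_scorable(items_answered=26, items_total=30, time_expired=False))  # False: exited early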
Schaeffer, Gary A.; And Others – 1995
This report summarizes the results from two studies. The first assessed the comparability of scores derived from linear computer-based (CBT) and computer adaptive (CAT) versions of the three Graduate Record Examinations (GRE) General Test measures. A verbal CAT was taken by 1,507, a quantitative CAT by 1,354, and an analytical CAT by 995…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores
Eignor, Daniel R.; And Others – 1993
The extensive computer simulation work done in developing the computer adaptive versions of the Graduate Record Examinations (GRE) Board General Test and the College Board Admissions Testing Program (ATP) Scholastic Aptitude Test (SAT) is described in this report. Both the GRE General and SAT computer adaptive tests (CATs), which are fixed length…
Descriptors: Adaptive Testing, Algorithms, Case Studies, College Entrance Examinations
Sebrechts, Marc M.; And Others – 1993
This report describes the development of a new tool for assessment research in graduate education. The tool, the Algebra Assessment System, is based on GIDE, a pre-existing program that diagnostically analyzes complex constructed responses to algebra word problems. The project had three goals. The first goal was to build a generically usable…
Descriptors: Algebra, College Entrance Examinations, Computer Assisted Testing, Computer Interfaces