Showing all 12 results
Peer reviewed
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
Peer reviewed
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Peer reviewed
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
Peer reviewed
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Peer reviewed
Reilly, Richard R. – Educational and Psychological Measurement, 1975
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by a capitalization of the keying procedures on omitting tendencies, a procedure was devised to key options empirically with a "correction-for-guessing" constraint. (Author)
Descriptors: Achievement Tests, Graduate Study, Guessing (Tests), Scoring Formulas
Reilly, Richard R.; Jackson, Rex – 1972
Item options of shortened forms of the Graduate Record Examination Verbal and Quantitative tests were empirically weighted by two variants of a method originally attributed to Guttman. The first method assigned to each option of an item the mean standard score on the remaining items of all subjects choosing that option. The second procedure…
Descriptors: Correlation, Factor Analysis, Graduate Study, Scoring
Reilly, Richard R.; Jackson, Rex – 1972
Evidence on how the psychometric properties of verbal and quantitative academic aptitude tests are affected when item options are weighted using rather simple conceptual procedures is presented. This is discussed in connection with the scoring methods used on the Graduate Record Examinations. (DG)
Descriptors: Academic Aptitude, Achievement Tests, Aptitude Tests, Predictive Validity
Marco, Gary L. – 1968
Normative data were obtained on the performance of first-year graduate students on the Aptitude Test and Advanced Tests of the Graduate Record Examinations. The population consisted of students enrolled as full-time graduate students for the first time in the fall of 1964 in a college or university belonging to the Council of Graduate Schools…
Descriptors: Achievement Tests, Aptitude Tests, College Entrance Examinations, Error of Measurement
Angoff, William H.; Cowell, William R. – 1985
Linear and equipercentile equating conversions were developed for two forms of the Graduate Record Examinations (GRE) quantitative test and the verbal-plus-quantitative test. From a very large sample of students taking the GRE in October 1981, subpopulations were selected with respect to race, sex, field of study, and level of performance (defined…
Descriptors: Aptitude Tests, College Entrance Examinations, Equated Scores, Error of Measurement
Rock, Donald A. – 1974
First-year graduate students were asked to respond to a biographical questionnaire which emphasized motivational variables in addition to the usual demographic variables. It was hypothesized that the students could select from a group of ability measures the one best indicator of how well they would do in graduate school. To test this hypothesis…
Descriptors: Academic Achievement, Achievement Tests, Aptitude Tests, Biographical Inventories
Lannholm, Gerald V. – 1968
One or more graduate departments from ten schools participated in studies of six different disciplines: chemistry, English, history, philosophy, physics, and psychology. Subjects were students who first enrolled for graduate study in these departments between 1957 and 1960. Predictor data obtained for the students included scores on the Graduate…
Descriptors: Academic Achievement, Achievement Tests, Aptitude Tests, Chemistry
Wilson, Kenneth M. – 1979
The Graduate Record Examinations (GRE) Cooperative Validity Studies Project began in 1975 to fill the need for cooperation between graduate schools and testing agencies and for graduate-level validity studies. More than 150 data sets from the 39 participating schools, representing over 19 fields of study, were analyzed. The data confirmed earlier…
Descriptors: Academic Achievement, Agency Cooperation, College Entrance Examinations, Departments