Showing 1 to 15 of 49 results
Peer reviewed
PDF on ERIC: Download full text
Klieger, David M.; Bridgeman, Brent; Tannenbaum, Richard J.; Cline, Frederick A.; Olivera-Aguilar, Margarita – ETS Research Report Series, 2018
Educational Testing Service (ETS), working with 21 U.S. law schools, evaluated the predictive validity of the GRE® General Test using a sample of 1,587 current and graduated law students. Results indicated that the GRE is a strong, generalizably valid predictor of first-year law school grades and that it provides useful information even when…
Descriptors: College Entrance Examinations, Graduate Study, Test Validity, Scores
Peer reviewed
PDF on ERIC: Download full text
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the two-essay writing assessment of the analytical writing section of the GRE® test, in which human and machine scores are combined to create scores at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
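The contributory approach described above can be sketched in a few lines. The equal human/machine weighting, the half-point rounding, and the function names below are illustrative assumptions, not the operational ETS scoring rule:

```python
# Minimal sketch of contributory scoring: each essay's human and machine
# scores contribute to a task score, and the two task scores are averaged
# into a section score on the 0-6, half-point GRE writing scale.
# The 50/50 weighting and the rounding rule are assumptions for illustration.

def task_score(human: float, machine: float, w_human: float = 0.5) -> float:
    """Combine one essay's human and machine scores into a task score."""
    return w_human * human + (1.0 - w_human) * machine

def section_score(tasks: list[tuple[float, float]]) -> float:
    """Average task scores across the issue and argument essays,
    reporting in half-point increments."""
    combined = [task_score(h, m) for h, m in tasks]
    return round(2 * sum(combined) / len(combined)) / 2

# Issue essay: human 4.0, e-rater 3.8; argument essay: human 3.0, e-rater 3.4
print(section_score([(4.0, 3.8), (3.0, 3.4)]))  # -> 3.5
```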
Peer reviewed
Direct link
Bridgeman, Brent – Educational Measurement: Issues and Practice, 2016
Scores on essay-based assessments that are part of standardized admissions tests are typically given relatively little weight in admissions decisions compared to the weight given to scores from multiple-choice assessments. Evidence is presented to suggest that more weight should be given to these assessments. The reliability of the writing scores…
Descriptors: Multiple Choice Tests, Scores, Standardized Tests, Comparative Analysis
Peer reviewed
Direct link
Oliveri, Maria Elena; Lawless, Rene; Robin, Frederic; Bridgeman, Brent – Applied Measurement in Education, 2018
We analyzed a pool of items from an admissions test for differential item functioning (DIF) across groups defined by age, socioeconomic status, citizenship, or English language status, using Mantel-Haenszel and item response theory procedures. Items exhibiting DIF were systematically examined to identify possible sources of DIF by item type, content, and wording. DIF was…
Descriptors: Test Bias, Comparative Analysis, Item Banks, Item Response Theory
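For readers unfamiliar with the first of the two procedures named in the abstract, a minimal Mantel-Haenszel DIF sketch follows, stratifying on total score in the usual way. The ETS delta transformation and flagging threshold are standard conventions, but the code itself is illustrative, not the authors' implementation:

```python
import numpy as np

def mantel_haenszel_dif(correct, is_reference, total_score):
    """Mantel-Haenszel common odds ratio for one studied item.

    correct: 0/1 responses to the item, one per examinee
    is_reference: True for reference-group members, False for focal
    total_score: matching variable (typically the raw total test score)

    Returns (alpha_MH, delta_MH), where delta_MH = -2.35 * ln(alpha_MH)
    puts the odds ratio on the ETS delta scale; |delta_MH| >= 1.5 is the
    conventional threshold for large ("C"-level) DIF.
    """
    correct = np.asarray(correct, dtype=int)
    ref = np.asarray(is_reference, dtype=bool)
    total = np.asarray(total_score)

    num = den = 0.0
    for s in np.unique(total):               # one 2x2 table per score stratum
        m = total == s
        a = np.sum(m & ref & (correct == 1))     # reference, right
        b = np.sum(m & ref & (correct == 0))     # reference, wrong
        c = np.sum(m & ~ref & (correct == 1))    # focal, right
        d = np.sum(m & ~ref & (correct == 0))    # focal, wrong
        t = a + b + c + d
        if t > 0:
            num += a * d / t
            den += b * c / t
    alpha = num / den
    return alpha, -2.35 * np.log(alpha)
```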
Peer reviewed
Direct link
Bridgeman, Brent; Cho, Yeonsuk; DiPietro, Stephen – Language Testing, 2016
Data from 787 international undergraduate students at an urban university in the United States were used to demonstrate the importance of separating a sample into meaningful subgroups in order to demonstrate the ability of an English language assessment to predict the first-year grade point average (GPA). For example, when all students were pooled…
Descriptors: Grade Prediction, English Curriculum, Language Tests, Undergraduate Students
Peer reviewed
Direct link
Liu, Ou Lydia; Bridgeman, Brent; Gu, Lixiong; Xu, Jun; Kong, Nan – Educational and Psychological Measurement, 2015
Research on examinees' response changes on multiple-choice tests over the past 80 years has yielded some consistent findings, including that most examinees make score gains by changing answers. This study expands the research on response changes by focusing on a high-stakes admissions test--the Verbal Reasoning and Quantitative Reasoning measures…
Descriptors: College Entrance Examinations, High Stakes Tests, Graduate Study, Verbal Ability
Peer reviewed
Direct link
Bridgeman, Brent; Trapani, Catherine; Attali, Yigal – Applied Measurement in Education, 2012
Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
Descriptors: Scoring, Essay Tests, College Entrance Examinations, High Stakes Tests
Peer reviewed
Direct link
Cho, Yeonsuk; Bridgeman, Brent – Language Testing, 2012
This study examined the relationship between scores on the TOEFL Internet-Based Test (TOEFL iBT®) and academic performance in higher education, defined here in terms of grade point average (GPA). The academic records for 2,594 undergraduate and graduate students were collected from 10 universities in the United States. The data consisted of…
Descriptors: Evidence, Academic Records, Graduate Students, Universities
Peer reviewed
Direct link
Bridgeman, Brent; Burton, Nancy; Cline, Frederick – Applied Measurement in Education, 2009
Descriptions of validity results based solely on correlation coefficients or percent of the variance accounted for are not merely difficult to interpret, they are likely to be misinterpreted. Predictors that apparently account for a small percent of the variance may actually be highly important from a practical perspective. This study combined two…
Descriptors: Predictive Validity, College Entrance Examinations, Graduate Study, Grade Point Average
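The point about small variance percentages is easy to make concrete. The arithmetic below uses Rosenthal and Rubin's binomial effect size display, one standard way to re-express a correlation in practical terms; it illustrates the report's general argument rather than reproducing its specific analyses:

```python
# A predictor that "accounts for only 9% of the variance" (r = .30) still
# separates outcomes substantially. The binomial effect size display (BESD)
# re-expresses r as the gap in success rates between examinees above and
# below the median on the predictor.
r = 0.30
variance_explained = r ** 2          # 0.09 -> "only 9% of the variance"
success_top_half = 0.50 + r / 2      # 65% success above the median
success_bottom_half = 0.50 - r / 2   # 35% success below the median
print(f"{variance_explained:.0%} of variance, yet success rates of "
      f"{success_top_half:.0%} vs. {success_bottom_half:.0%}")
# -> 9% of variance, yet success rates of 65% vs. 35%
```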
Peer reviewed
PDF on ERIC: Download full text
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the e-rater® scoring engine were built and evaluated for the GRE® argument and issue writing tasks. Prompt-specific, generic, and generic-with-prompt-specific-intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
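Of the evaluation statistics the abstract lists, weighted kappa is the least self-explanatory. Below is a minimal quadratic-weighted-kappa sketch, assuming integer essay scores on the 0-6 scale; it illustrates the statistic itself, not ETS's evaluation code:

```python
import numpy as np

def quadratic_weighted_kappa(scores_a, scores_b, min_score=0, max_score=6):
    """Agreement between two raters (e.g., human and e-rater) with
    quadratic penalties for larger disagreements; 1.0 = perfect agreement,
    0.0 = chance-level agreement."""
    a = np.asarray(scores_a, dtype=int) - min_score
    b = np.asarray(scores_b, dtype=int) - min_score
    n = max_score - min_score + 1

    observed = np.zeros((n, n))          # joint score distribution
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()

    # Agreement expected by chance, from the two raters' marginals
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Example: human vs. machine scores for five essays
print(round(quadratic_weighted_kappa([4, 3, 5, 2, 4], [4, 3, 4, 2, 5]), 3))
```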
Peer reviewed
PDF on ERIC: Download full text
Bridgeman, Brent; Cline, Frederick; Levin, Jutta – ETS Research Report Series, 2008
In order to estimate the likely effects on item difficulty when a calculator becomes available on the quantitative section of the Graduate Record Examinations® (GRE®-Q), 168 items (in six 28-item forms) were administered either with or without access to an on-screen four-function calculator. The forms were administered as a special research…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Peer reviewed
PDF on ERIC: Download full text
Bridgeman, Brent; Pollack, Judith; Burton, Nancy – Journal of College Admission, 2008
Two methods of showing the ability of high school grades (high school grade point averages) and SAT scores to predict cumulative grades in different types of college courses were evaluated in a sample of 26 colleges. Each college contributed data from three cohorts of entering freshmen, and each cohort was followed for at least four years.…
Descriptors: Prediction, Grade Point Average, Academic Achievement, Social Sciences
Peer reviewed
Direct link
Bridgeman, Brent; Trapani, Catherine; Curley, Edward – Journal of Educational Measurement, 2004
The impact on SAT I: Reasoning Test scores of allowing more time for each question was estimated by embedding sections with a reduced number of questions into the standard 30-minute equating section of two national test administrations. Thus, for example, questions were deleted from a verbal section that contained 35 questions to produce forms…
Descriptors: College Entrance Examinations, Test Length, Scores
Peer reviewed
PDF on ERIC: Download full text
Bridgeman, Brent; Burton, Nancy; Cline, Frederick – ETS Research Report Series, 2008
Descriptions of validity results for the GRE® General Test based solely on correlation coefficients or percentage of the variance accounted for are not merely difficult to interpret, they are likely to be misinterpreted. Predictors that apparently account for a small percentage of the variance may actually be highly important from a practical…
Descriptors: College Entrance Examinations, Graduate Study, Test Validity, Grades (Scholastic)
Bridgeman, Brent; Burton, Nancy; Cline, Frederick – 2000
Using data from a sample of 10 colleges at which most students had taken both the SAT I: Reasoning Test and the SAT II: Subject Tests, researchers simulated the effects of making selection decisions using SAT II scores in place of SAT I scores. Students in each college were treated as forming the applicant pool for a more selective college, and the top…
Descriptors: College Applicants, College Entrance Examinations, Higher Education, Selection