Showing 1 to 15 of 17 results
Peer reviewed
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
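The contributory scoring approach summarized in the entry above blends human and machine essay scores into task- and section-level scores. Purely as an illustration, the Python sketch below shows one generic way such a blend can work; the weight, rounding rule, and adjudication threshold are hypothetical assumptions, not the model in the report.

    # Hypothetical sketch of blending one human and one machine score per
    # essay task, then averaging tasks into a section score. All constants
    # are illustrative assumptions, not values from the report.

    def task_score(human, machine, human_weight=0.5, adjudication_threshold=1.5):
        """Blend one human and one machine score for a single essay task."""
        if abs(human - machine) >= adjudication_threshold:
            # Large disagreement: fall back to the human score (hypothetical
            # rule; operational programs typically route such essays to a
            # second human rater instead).
            return human
        return human_weight * human + (1.0 - human_weight) * machine

    def section_score(task_scores):
        """Average task scores into a section score, rounded to half points."""
        mean = sum(task_scores) / len(task_scores)
        return round(mean * 2) / 2

    # Example: two GRE-style essay tasks scored on a 0-6 scale.
    tasks = [task_score(4.0, 4.5), task_score(5.0, 4.0)]
    print(tasks, section_score(tasks))  # [4.25, 4.5] -> 4.5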
Peer reviewed
Liu, Ou Lydia; Bridgeman, Brent; Gu, Lixiong; Xu, Jun; Kong, Nan – Educational and Psychological Measurement, 2015
Research on examinees' response changes on multiple-choice tests over the past 80 years has yielded some consistent findings, including that most examinees make score gains by changing answers. This study expands the research on response changes by focusing on a high-stakes admissions test--the Verbal Reasoning and Quantitative Reasoning measures…
Descriptors: College Entrance Examinations, High Stakes Tests, Graduate Study, Verbal Ability
Peer reviewed
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
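The e-rater evaluation above names its agreement statistics explicitly. As a hedged illustration with made-up scores, the snippet below computes the three statistics the abstract lists, assuming a quadratic-weighted kappa and a pooled-SD standardized difference (common choices, but not confirmed by the abstract).

    # Illustrative computation of weighted kappa, Pearson correlation, and
    # the standardized difference in mean scores between human and machine
    # essay ratings. The ten score pairs are invented for demonstration.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    human   = np.array([3, 4, 4, 5, 2, 3, 4, 5, 3, 4])
    machine = np.array([3, 4, 5, 5, 2, 2, 4, 4, 3, 4])

    qwk = cohen_kappa_score(human, machine, weights="quadratic")
    r, _ = pearsonr(human, machine)

    # Standardized mean difference with a pooled-SD denominator (one common
    # convention; the report's exact formula is not given in the abstract).
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
    std_diff = (machine.mean() - human.mean()) / pooled_sd

    print(f"weighted kappa = {qwk:.3f}, r = {r:.3f}, std. diff = {std_diff:.3f}")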
Peer reviewed
Cho, Yeonsuk; Bridgeman, Brent – Language Testing, 2012
This study examined the relationship between scores on the TOEFL Internet-Based Test (TOEFL iBT®) and academic performance in higher education, defined here in terms of grade point average (GPA). The academic records for 2594 undergraduate and graduate students were collected from 10 universities in the United States. The data consisted of…
Descriptors: Evidence, Academic Records, Graduate Students, Universities
Peer reviewed
Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela – Language Testing, 2012
Scores assigned by trained raters and by an automated scoring system (SpeechRater™) on the speaking section of the TOEFL iBT™ were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…
Descriptors: Undergraduate Students, Speech Communication, Rating Scales, Scoring
Peer reviewed
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Bridgeman, Brent; Laitusis, Cara Cahalan; Cline, Frederick – College Board, 2007
The current study used three data sources to estimate time requirements for different item types on the now current SAT Reasoning Test™. First, we estimated times from a computer-adaptive version of the SAT® (SAT CAT) that automatically recorded item times. Second, we observed students as they answered SAT questions under strict time limits and…
Descriptors: College Entrance Examinations, Test Items, Thinking Skills, Computer Assisted Testing
Bridgeman, Brent; McBride, Amanda; Monaghan, William – Educational Testing Service, 2004
Imposing time limits on tests can serve a range of important functions. Time limits are essential, for example, if speed of performance is an integral component of what is being measured, as would be the case when testing such skills as how quickly someone can type. Limiting testing time also helps contain expenses associated with test…
Descriptors: Computer Assisted Testing, Timed Tests, Test Results, Aptitude Tests
Peer reviewed
Bridgeman, Brent; Lennon, Mary Lou; Jackenthal, Altamese – Applied Measurement in Education, 2003
Studied the effects of variations in screen size, resolution, and presentation delay on verbal and mathematics scores on a computerized test for 357 high school juniors. No significant differences were found for mathematics scores, but verbal scores were higher with the larger resolution display. (SLD)
Descriptors: Computer Assisted Testing, High School Students, High Schools, Mathematics Achievement
Peer reviewed
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing
Bridgeman, Brent; Potenza, Maria – 1998
Students taking the paper-based Scholastic Assessment Test (SAT) mathematics test are permitted to bring and use their own hand-held calculators, and this policy was continued for the computer-adaptive tests (CAT) designed for use in talent search programs. An on-screen calculator may also be used with the CAT. The bring-your-own option has raised…
Descriptors: Ability, Calculators, College Entrance Examinations, Computer Assisted Testing
Peer reviewed
Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results with 349 students indicate the constructs that the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
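The 1993 study above relies on exploratory and confirmatory factor analysis of item-type scores. The sketch below runs a small exploratory factor analysis on simulated subscores (all data, loadings, and the two-factor structure are invented) to show the kind of loading pattern such an analysis recovers.

    # Exploratory factor analysis over simulated item-type subscores; the
    # data, loadings, and two-factor choice are invented for illustration.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_examinees = 349  # matches the sample size mentioned in the abstract
    # Six item-type subscores driven by two latent abilities plus noise.
    ability = rng.normal(size=(n_examinees, 2))
    loadings = np.array([[0.8, 0.1], [0.7, 0.2], [0.9, 0.0],
                         [0.1, 0.8], [0.2, 0.7], [0.0, 0.9]])
    subscores = ability @ loadings.T + rng.normal(scale=0.5, size=(n_examinees, 6))

    fa = FactorAnalysis(n_components=2, rotation="varimax")
    fa.fit(subscores)
    print(np.round(fa.components_.T, 2))  # estimated loading per item type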
Bridgeman, Brent; Rock, Donald A. – 1993
Three new computer-administered item types for the analytical scale of the Graduate Record Examination (GRE) General Test were developed and evaluated. One item type was a free-response version of the current analytical reasoning item type. The second item type was a somewhat constrained free-response version of the pattern identification (or…
Descriptors: Adaptive Testing, College Entrance Examinations, College Students, Computer Assisted Testing
Peer reviewed
Gallagher, Ann; Bridgeman, Brent; Cahalan, Cara – Journal of Educational Measurement, 2002
Examined data from several national testing programs to determine whether the change from paper-based administration to computer-based tests influences group differences in performance. Results from four college and graduate entrance examinations and a professional licensing test show that African Americans and, to a lesser degree, Hispanics,…
Descriptors: Blacks, College Entrance Examinations, Computer Assisted Testing, Ethnic Groups
Peer reviewed
Morley, Mary; Bridgeman, Brent; Lawless, René – ETS Research Report Series, 2004
This study investigated the transfer of solution strategies between close variants of quantitative reasoning questions. Pre- and posttests were obtained from 406 college undergraduates, all of whom took the same posttest; pretests varied such that one group of participants saw close variants of one set of posttest items while other groups saw…
Descriptors: Test Items, Mathematics Tests, Problem Solving, Pretests Posttests