Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 9
Since 2006 (last 20 years): 20
Descriptor
College Entrance Examinations: 22
Computer Assisted Testing: 22
Correlation: 22
Scores: 10
Statistical Analysis: 7
Comparative Analysis: 6
Mathematics Tests: 6
Test Items: 6
Grade Point Average: 5
Graduate Study: 5
Predictor Variables: 5
Author
Attali, Yigal: 4
Bridgeman, Brent: 4
Breyer, F. Jay: 2
Shaw, Emily J.: 2
Ariel, Adelaide: 1
Beigman Klebanov, Beata: 1
Bejar, Isaac I.: 1
Berberoglu, Giray: 1
Braude, Eric John: 1
Bulut, Okan: 1
Burstein, Jill: 1
Publication Type
Reports - Research: 21
Journal Articles: 16
Dissertations/Theses -…: 1
Numerical/Quantitative Data: 1
Speeches/Meeting Papers: 1
Education Level
Higher Education: 18
Postsecondary Education: 12
Secondary Education: 4
High Schools: 3
Elementary Education: 2
Elementary Secondary Education: 2
Middle Schools: 2
Grade 5: 1
Grade 6: 1
Grade 8: 1
Intermediate Grades: 1
Location
Turkey: 2
Massachusetts: 1
Missouri: 1
United States: 1
Digital SAT® Score Relationships with Other Educational Measures: Early Convergent Validity Evidence
Marini, Jessica P.; Westrick, Paul A.; Young, Linda; Shaw, Emily J. – College Board, 2022
This study examines relationships between digital SAT scores and other relevant educational measures, such as high school grade point average (HSGPA), PSAT/NMSQT Total score, and Average AP Exam score, and compares those relationships to current paper and pencil SAT score relationships with the same measures. This information can provide…
Descriptors: Scores, College Entrance Examinations, Comparative Analysis, Test Format
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
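The Wang and Steedle entry above refers to equating procedures that adjust for mode effects so scores are comparable across paper and online testing. The sketch below is only a generic illustration of one such adjustment, linear equating, under made-up data; the score lists and function name are hypothetical and this is not ACT's operational procedure.

```python
# A minimal sketch of linear equating for a test "mode effect" (hypothetical
# data; a generic illustration, not ACT's operational equating procedure).
from statistics import mean, stdev

paper_scores = [18, 21, 24, 19, 22, 25, 20, 23]    # reference group (paper mode)
online_scores = [19, 22, 25, 20, 23, 26, 21, 24]   # new group (online mode)

def linear_equate(x, new_scores, ref_scores):
    """Map a score x from the new mode onto the reference-mode scale by
    matching the two groups' means and standard deviations."""
    slope = stdev(ref_scores) / stdev(new_scores)
    return mean(ref_scores) + slope * (x - mean(new_scores))

# An online score of 23 expressed on the paper-mode scale:
print(round(linear_equate(23, online_scores, paper_scores), 2))
```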
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
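The Breyer, Rupp, and Bridgeman entry above describes a contributory scoring approach in which human and machine scores are combined at the task and section levels. The sketch below is a minimal, hypothetical version of that idea; the equal human/machine weighting and half-point rounding are assumptions made for illustration, not the operational GRE scoring rules.

```python
# A minimal sketch of contributory scoring: one human rating and one machine
# rating are combined per essay task, then task scores are aggregated to a
# section score. Weights and rounding are hypothetical illustrations.

def task_score(human: float, machine: float, w_machine: float = 0.5) -> float:
    """Combine a human rating and a machine rating for a single essay task."""
    return (1 - w_machine) * human + w_machine * machine

def section_score(issue: float, argument: float) -> float:
    """Average the two task scores and round to the nearest half point."""
    return round((issue + argument) / 2 * 2) / 2

issue = task_score(human=4.0, machine=3.8)
argument = task_score(human=4.5, machine=4.1)
print(section_score(issue, argument))  # prints 4.0 for these sample ratings
```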
Kalender, Ilker; Berberoglu, Giray – Educational Sciences: Theory and Practice, 2017
Admission to university in Turkey is highly competitive and involves a number of practical problems, concerning not only the test administration process itself but also the psychometric properties of test scores. Computerized adaptive testing (CAT) is seen as a possible alternative approach to solve these problems. In the first phase of…
Descriptors: Foreign Countries, Computer Assisted Testing, College Admission, Simulation

Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
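The Chen, Zhang, and Bejar entry above notes that automated essay scoring generally computes a score as a function of macrofeatures derived from NLP-extracted microfeatures. The sketch below shows that pipeline in miniature; the feature names, aggregation rules, and weights are hypothetical and do not represent the e-rater model itself.

```python
# A minimal sketch of feature-based automated essay scoring: microfeatures are
# collapsed into macrofeatures, and the essay score is a weighted combination
# of the macrofeatures. All names and coefficients here are hypothetical.

microfeatures = {
    "spelling_errors": 2, "agreement_errors": 1,   # feed a mechanics macrofeature
    "word_count": 420, "avg_word_length": 4.9,     # feed development/lexical macrofeatures
}

def macrofeatures(micro: dict) -> dict:
    """Collapse microfeatures into a few interpretable macrofeatures."""
    return {
        "mechanics": -0.1 * (micro["spelling_errors"] + micro["agreement_errors"]),
        "development": 0.002 * micro["word_count"],
        "lexical": 0.2 * micro["avg_word_length"],
    }

def essay_score(macro: dict, weights: dict, intercept: float = 1.0) -> float:
    """Score the essay as a linear combination of macrofeature values."""
    return intercept + sum(weights[k] * v for k, v in macro.items())

weights = {"mechanics": 1.0, "development": 1.0, "lexical": 1.0}
print(round(essay_score(macrofeatures(microfeatures), weights), 2))
```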
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Burstein, Jill; McCaffrey, Dan; Beigman Klebanov, Beata; Ling, Guangming – Grantee Submission, 2017
No significant body of research examines writing achievement and the specific skills and knowledge in the writing domain for postsecondary (college) students in the U.S., even though many at-risk students lack the prerequisite writing skills required to persist in their education. This paper addresses this gap through a novel…
Descriptors: Computer Software, Writing Evaluation, Writing Achievement, College Students
Peters, Joshua A. – ProQuest LLC, 2016
Little is known about whether results on high-stakes assessments differ between paper-and-pencil and computer-based administrations when race and/or free and reduced-price lunch status are considered. The purpose of this study was to add new knowledge to this field by determining whether there is a…
Descriptors: Comparative Analysis, Computer Assisted Testing, Lunch Programs, High Stakes Tests
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Pike, Gary R.; Hansen, Michele J.; Childress, Janice E. – Journal of College Student Retention: Research, Theory & Practice, 2014
The present research examined the extent to which pre-college characteristics, high school experiences, college expectations, and initial enrollment characteristics were related to graduation from college. Data from admission applications, the "ACT Compass" survey, and initial enrollment measures for Fall 2004 and Fall 2005 first-time…
Descriptors: Student Characteristics, Educational Experience, Correlation, Expectation
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Thompson, Meredith Myra; Braude, Eric John – Journal of Educational Computing Research, 2016
The assessment of learning in large online courses requires tools that are valid, reliable, easy to administer, and automatically scorable. We have evaluated an online assessment and learning tool called Knowledge Assembly, or Knowla. Knowla measures a student's knowledge in a particular subject by having the student assemble a set of…
Descriptors: Computer Assisted Testing, Teaching Methods, Online Courses, Critical Thinking
Cho, Yeonsuk; Bridgeman, Brent – Language Testing, 2012
This study examined the relationship between scores on the TOEFL Internet-Based Test (TOEFL iBT[R]) and academic performance in higher education, defined here in terms of grade point average (GPA). The academic records for 2594 undergraduate and graduate students were collected from 10 universities in the United States. The data consisted of…
Descriptors: Evidence, Academic Records, Graduate Students, Universities
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
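The Bulut and Kan entry above describes the core CAT mechanism: items are selected from a bank based on the examinee's responses, so the difficulty of the test tracks the examinee's ability. The sketch below is a toy version of that loop under a Rasch model; the item bank, step-size update, and canned responses are all hypothetical.

```python
# A minimal sketch of computerized adaptive testing (CAT) item selection under
# a Rasch model: after each response the ability estimate is updated and the
# next item is the unused item whose difficulty is closest to that estimate.
import math

bank = {"q1": -1.0, "q2": -0.5, "q3": 0.0, "q4": 0.5, "q5": 1.0}  # item: difficulty

def prob_correct(theta: float, b: float) -> float:
    """Rasch probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta: float, used: set) -> str:
    """Pick the unused item whose difficulty best matches the ability estimate."""
    return min((i for i in bank if i not in used), key=lambda i: abs(bank[i] - theta))

theta, used = 0.0, set()
for response in [1, 0, 1]:            # hypothetical correct/incorrect answers
    item = next_item(theta, used)
    used.add(item)
    # crude stepwise update: move up after a correct answer, down after a miss
    theta += 0.5 * (response - prob_correct(theta, bank[item]))
    print(item, round(theta, 2))
```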