Showing all 10 results
Peer reviewed
Boote, Stacy K.; Boote, David N.; Williamson, Steven – Cogent Education, 2021
Several decades of research suggest that differences in test performance across paper-based and computer-based assessments have been largely ameliorated through attention to test presentation equivalence, though no studies to date have focused on graph comprehension items. Test items requiring graph comprehension are increasingly common but may be…
Descriptors: Graduate Students, Masters Programs, Business Administration Education, Graphs
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Peer reviewed
Smolinsky, Lawrence; Marx, Brian D.; Olafsson, Gestur; Ma, Yanxia A. – Journal of Educational Computing Research, 2020
Computer-based testing is an expanding use of technology offering advantages to teachers and students. We studied Calculus II classes for science, technology, engineering, and mathematics majors using different testing modes. Three sections with 324 students employed paper-and-pencil testing, computer-based testing, and both. Computer tests gave…
Descriptors: Test Format, Computer Assisted Testing, Paper (Material), Calculus
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Peer reviewed
Park, Enoch; Martin, Florence; Lambert, Richard – Quarterly Review of Distance Education, 2019
The purpose of this study was to identify the potential factors that could predict college students' success in a large-size undergraduate hybrid learning course. Predictive values of students' demographic and academic background variables were examined to establish standardized models of contribution toward final grades. Next, patterns of student…
Descriptors: Predictor Variables, Academic Achievement, Undergraduate Students, Student Participation
Peer reviewed
Liu, Ou Lydia; Bridgeman, Brent; Gu, Lixiong; Xu, Jun; Kong, Nan – Educational and Psychological Measurement, 2015
Research on examinees' response changes on multiple-choice tests over the past 80 years has yielded some consistent findings, including that most examinees make score gains by changing answers. This study expands the research on response changes by focusing on a high-stakes admissions test--the Verbal Reasoning and Quantitative Reasoning measures…
Descriptors: College Entrance Examinations, High Stakes Tests, Graduate Study, Verbal Ability
Peer reviewed
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Peer reviewed
Stankov, Lazar; Lee, Jihyun – Journal of Educational Psychology, 2008
This article examines the nature of confidence in relation to abilities, personality, and metacognition. Confidence scores were collected during the administration of Reading and Listening sections of the Test of English as a Foreign Language Internet-Based Test (TOEFL iBT) to 824 native speakers of English. Those confidence scores were correlated…
Descriptors: Grade Point Average, Validity, Cognitive Tests, Personality
Peer reviewed
Handwerk, Phil – ETS Research Report Series, 2007
Online high schools are growing significantly in number, popularity, and function. However, little empirical data has been published about the effectiveness of these institutions. This research examined the frequency of group work and extended essay writing among online Advanced Placement Program® (AP®) students, and how these tasks may have…
Descriptors: Advanced Placement Programs, Advanced Placement, Computer Assisted Testing, Models
Peer reviewed
Stricker, Lawrence J.; Wilder, Gita Z.; Bridgeman, Brent – International Journal of Testing, 2006
The aim of this study was to assess test takers' attitudes and beliefs about an admissions test used extensively in graduate schools of business in the United States, the Graduate Management Admission Test (GMAT), and the relationships of these attitudes and beliefs to test performance. A set of attitude and belief items was administered by…
Descriptors: Computer Assisted Testing, Test Wiseness, Gender Differences, Ethnic Groups