Showing all 15 results
Peer reviewed
Lim, Lyndon – Journal of Psychoeducational Assessment, 2020
This article outlines the development and validation of the Computer-Delivered Test (CDT) Acceptance Questionnaire (CTAQ). The CTAQ was designed as a practical measure of Singapore secondary and high school students' (Grades 7-12) acceptance of taking tests within an e-assessment system. The stages of test (questionnaire item)…
Descriptors: Student Attitudes, High School Students, Secondary School Students, Computer Assisted Testing
Bronson Hui – ProQuest LLC, 2021
Vocabulary researchers have started expanding their assessment toolbox by incorporating timed tasks and psycholinguistic instruments (e.g., priming tasks) to gain insights into lexical development (e.g., Elgort, 2011; Godfroid, 2020b; Nakata & Elgort, 2020; Vandenberghe et al., 2021). These time-sensitive and implicit word measures differ…
Descriptors: Measures (Individuals), Construct Validity, Decision Making, Vocabulary Development
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Peer reviewed
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
The combination of different item formats is found quite often in large-scale assessments, and dimensionality analyses often indicate that tests are multidimensional with respect to task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Peer reviewed
Zou, Xiao-Ling; Chen, Yan-Min – Technology, Pedagogy and Education, 2016
The effects of computer and paper test media on the writing scores and cognitive writing processes of EFL test-takers with different levels of computer familiarity have been comprehensively explored from the learners' perspective as well as on the basis of related theories and practice. The results indicate significant differences in test scores among the…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Test Format
Peer reviewed
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Peer reviewed
Morrison, Keith – Educational Research and Evaluation, 2013
This paper reviews the literature on comparing online and paper course evaluations in higher education and provides a case study of a very large randomised trial on the topic. It presents a mixed but generally optimistic picture of online course evaluations with respect to response rates, what they indicate, and how to increase them. The paper…
Descriptors: Literature Reviews, Course Evaluation, Case Studies, Higher Education
Walt, Nancy; Atwood, Kristin; Mann, Alex – Journal of Technology, Learning, and Assessment, 2008
The purpose of this study was to determine whether survey medium (electronic versus paper format) has a significant effect on the results achieved. To compare survey media, responses from elementary students to British Columbia's Satisfaction Survey were analyzed. Although this study was not experimental in design, the data set served as a…
Descriptors: Student Attitudes, Factor Analysis, Foreign Countries, Elementary School Students
Mircea-Pines, Walter J. – ProQuest LLC, 2009
This dissertation study examined the reliability and validity claims of a modified version of the Spanish Modern Language Association Foreign Language Proficiency Test for Teachers and Advanced Students administered at George Mason University (GMU). The study used the 1999 computerized GMU version that was administered to 277 test-takers via…
Descriptors: College Students, Advanced Students, Second Language Learning, Test Validity
Peer reviewed
Pomplun, Mark; Custer, Michael – Journal of Educational Computing Research, 2005
This study investigated the equivalence of scores from computerized and paper-and-pencil formats of a series of K-3 reading screening tests. Concerns about score equivalence on the computerized formats were warranted because of the use of reading passages, computer unfamiliarity of primary school students, and teacher versus computer…
Descriptors: Screening Tests, Reading Tests, Family Income, Factor Analysis
Peer reviewed
Henly, Susan J.; And Others – Applied Psychological Measurement, 1989
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive versions of the Differential Aptitude Test (DAT). Results for 332 students indicate that the computerized version of the DAT is an adequate representation of the conventional test battery. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)
Ekstrom, Ruth B.; Bejar, Isaac I. – 1990
The history of the Educational Testing Service (ETS) Factor Kits is summarized. The original ETS Factor Kit was developed in 1954 and contained 51 items: three for each of 15 factors and six for a 16th factor. The next edition was developed in 1963 and included adaptations (clones) of the defining tests instead of exact copies. These…
Descriptors: Cognitive Processes, Cognitive Tests, Comparative Testing, Computer Assisted Testing
Staples, Jane G.; Luzzo, Darrell Anthony – 1999
The purpose of this study was to determine the measurement comparability of paper-and-pencil and multimedia versions of two career assessment components in the DISCOVER multimedia career and education planning system. Students in grades 9 (N=606) and 11 (N=416) completed paper-and-pencil and Compact Disc-interactive (CD-i) versions of the Unisex…
Descriptors: Career Counseling, Comparative Analysis, Computer Assisted Testing, Equated Scores