Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 9 |
Descriptor
Computer Assisted Testing | 11 |
Evaluation Methods | 11 |
Intermode Differences | 11 |
Foreign Countries | 5 |
Comparative Testing | 4 |
Higher Education | 4 |
Item Analysis | 4 |
Academic Achievement | 3 |
Evaluation Research | 3 |
Likert Scales | 3 |
Questionnaires | 3 |
Publication Type
Journal Articles | 11 |
Reports - Research | 10 |
Reports - Evaluative | 1 |
Education Level
Higher Education | 5 |
Postsecondary Education | 3 |
Elementary Secondary Education | 2 |
Elementary Education | 1 |
Grade 6 | 1 |
High Schools | 1 |
Middle Schools | 1 |
Secondary Education | 1 |
Location
United Kingdom | 2 |
China | 1 |
New Zealand | 1 |
Taiwan | 1 |
Texas | 1 |
Jacoby, Jennifer C.; Heugh, Sheelagh; Bax, Christopher; Branford-White, Christopher – Innovations in Education and Teaching International, 2014
The student cohort on the University Science Extended Degree (SED) course is diverse in terms of educational experience. One of the key facets of teaching at this level is to engage and prepare students for higher levels of education in the sciences. The purpose of this evaluation is to relate a specific virtual framework, designed for students…
Descriptors: Formative Evaluation, Instructional Improvement, Educational Experience, Biology
Sopina, Elizaveta; McNeill, Rob – Assessment & Evaluation in Higher Education, 2015
Feedback can have a great impact on student learning. However, in order for it to be effective, feedback needs to be of high quality. Electronic marking has been one of the latest adaptations of technology in teaching and offers a new format for delivering feedback. There is little research investigating the impact the format of feedback has on…
Descriptors: Higher Education, Feedback (Response), Delivery Systems, Computer Assisted Testing
Morrison, Keith – Educational Research and Evaluation, 2013
This paper reviews the literature on comparing online and paper course evaluations in higher education and provides a case study of a very large randomised trial on the topic. It presents a mixed but generally optimistic picture of online course evaluations with respect to response rates, what they indicate, and how to increase them. The paper…
Descriptors: Literature Reviews, Course Evaluation, Case Studies, Higher Education
King, Chula G.; Guyette, Roger W., Jr.; Piotrowski, Chris – Journal of Educators Online, 2009
Academic integrity has been a perennial issue in higher education. Undoubtedly, the advent of the Internet and advances in user-friendly technological devices have spurred both concern on the part of faculty and research interest in the academic community regarding inappropriate and unethical behavior on the part of students. This study is…
Descriptors: Cheating, Integrity, Ethics, Business Education
Kim, Do-Hong; Huynh, Huynh – Journal of Technology, Learning, and Assessment, 2007
This study examined comparability of student scores obtained from computerized and paper-and-pencil formats of the large-scale statewide end-of-course (EOC) examinations in the two subject areas of Algebra and Biology. Evidence in support of comparability of computerized and paper-based tests was sought by examining scale scores, item parameter…
Descriptors: Computer Assisted Testing, Measures (Individuals), Biology, Algebra
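The Kim and Huynh abstract looks for comparability evidence in scale scores and item parameter estimates. A minimal sketch of one such check, assuming IRT difficulty estimates are available separately for each mode (the values below are hypothetical, not data from the study):

    # Correlate item difficulty estimates obtained separately from the
    # computerized and paper-and-pencil administrations. All values are
    # hypothetical placeholders.
    import numpy as np
    from scipy.stats import pearsonr

    b_paper = np.array([-1.2, -0.4, 0.1, 0.6, 1.3])     # hypothetical paper-mode difficulties
    b_computer = np.array([-1.1, -0.5, 0.2, 0.7, 1.2])  # hypothetical computer-mode difficulties

    r, p = pearsonr(b_paper, b_computer)
    mean_shift = np.mean(b_computer - b_paper)
    print(f"cross-mode correlation: r = {r:.3f} (p = {p:.3f})")
    print(f"mean difficulty shift (computer - paper): {mean_shift:+.3f}")

A correlation near 1 and a mean shift near 0 would be consistent with the comparability claim; a systematic shift would suggest a mode effect.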
Johnson, Martin; Nadas, Rita – Learning, Media and Technology, 2009
Within large-scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the…
Descriptors: Reading Comprehension, Educational Assessment, Internet, Essay Tests
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
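Cohen's d, the test-level effect size named in the Puhan, Boughton, and Kim abstract, is the difference between the two mode means divided by a pooled standard deviation. A minimal sketch, with simulated scores rather than the certification-test data:

    # Cohen's d for a paper-and-pencil (PPT) vs. computer-based (CBT) mean
    # difference, using the pooled standard deviation. Scores are simulated
    # placeholders, not the study's data.
    import numpy as np

    def cohens_d(x, y):
        nx, ny = len(x), len(y)
        pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                             (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
        return (np.mean(x) - np.mean(y)) / pooled_sd

    rng = np.random.default_rng(0)
    ppt_scores = rng.normal(150, 10, 500)  # hypothetical PPT scale scores
    cbt_scores = rng.normal(151, 10, 500)  # hypothetical CBT scale scores
    print(f"Cohen's d (PPT - CBT): {cohens_d(ppt_scores, cbt_scores):.3f}")

By conventional benchmarks, |d| below roughly 0.2 is considered small, which is the kind of result a mode-comparability study hopes to find.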
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric [TM], and by human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric [TM] and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
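The group-mean comparison Wang and Brown describe can be sketched as a paired test on essays scored by both the AES engine and human raters. The numbers below are invented placeholders, not IntelliMetric [TM] or WriterPlacer output:

    # Paired comparison of AES and human scores on the same essays,
    # illustrative data only.
    import numpy as np
    from scipy.stats import ttest_rel

    aes_scores = np.array([6, 5, 7, 4, 6, 5, 7, 6], dtype=float)    # hypothetical AES scores
    human_scores = np.array([6, 5, 6, 5, 6, 5, 7, 5], dtype=float)  # hypothetical human scores

    t, p = ttest_rel(aes_scores, human_scores)
    print(f"mean AES = {aes_scores.mean():.2f}, mean human = {human_scores.mean():.2f}")
    print(f"paired t = {t:.3f}, p = {p:.3f}")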
Lai, Ah-Fur; Chen, Deng-Jyi; Chen, Shu-Ling – Journal of Educational Multimedia and Hypermedia, 2008
Item Response Theory (IRT) has been studied and applied in computer-based testing for decades. However, almost all of these existing studies focus merely on test questions presented exclusively in a text-based (or static text/graphic) form. In this paper, we present our study on test questions using both…
Descriptors: Elementary School Students, Semantics, Difficulty Level, Item Response Theory
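For readers unfamiliar with the IRT machinery the Lai, Chen, and Chen abstract builds on: one common form is the three-parameter logistic (3PL) model, in which the probability of a correct response depends on examinee ability and on item discrimination, difficulty, and guessing parameters. A minimal sketch with illustrative parameter values (the excerpt does not state which model or estimates the study used):

    # Three-parameter logistic (3PL) item response model: the probability
    # that an examinee of ability theta answers a given item correctly.
    # Parameter values are illustrative.
    import numpy as np

    def p_correct_3pl(theta, a, b, c):
        """P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))
        a: discrimination, b: difficulty, c: lower (guessing) asymptote."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    for theta in np.linspace(-3, 3, 7):
        print(f"theta = {theta:+.1f}  P(correct) = {p_correct_3pl(theta, a=1.2, b=0.0, c=0.2):.3f}")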

Clariana, Roy; Wallace, Patricia – British Journal of Educational Technology, 2002
Describes a study that seeks to confirm several key factors in computer-based versus paper-based assessment. Based on earlier research, the factors considered in this study of undergraduates include content familiarity; computer familiarity; competitiveness; and gender. Reports results of analysis of variance that showed the computer-based test…
Descriptors: Academic Achievement, Analysis of Variance, Computer Assisted Testing, Computer Attitudes
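The analysis of variance mentioned in the Clariana and Wallace abstract compares mean scores across groups; their design also crossed factors such as gender and familiarity, but a minimal one-way sketch over assessment mode, with simulated data, looks like this:

    # One-way ANOVA on scores grouped by assessment mode; data are simulated
    # placeholders, not the study's undergraduate sample.
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(42)
    computer_based = rng.normal(75, 8, 60)  # hypothetical computer-based scores
    paper_based = rng.normal(72, 8, 60)     # hypothetical paper-based scores

    f, p = f_oneway(computer_based, paper_based)
    print(f"one-way ANOVA: F = {f:.3f}, p = {p:.3f}")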

Hunt, Nicoll; Hughes, Janet; Rowe, Glenn – British Journal of Educational Technology, 2002
Describes the development of a tool, FACT (Formative Automated Computer Testing), to formatively assess information technology skills of college students in the United Kingdom. Topics include word processing competency; tests designed by tutors and delivered via a network; and results of an evaluation that showed students preferred automated…
Descriptors: Competence, Computer Assisted Testing, Computer Networks, Evaluation Methods