Showing all 15 results
Lestari, Santi – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain paper-based. Insufficient and unequal digital provision across schools is often identified as a major barrier to the full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Peer reviewed
Direct link
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in studies of the Zone of Proximal Development (ZPD), its operation in individualized and group-wide forms has been controversial. To cast some empirical light on the issue, this study examined the applicability of the two ZPD-based instruction scenarios to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Peer reviewed
Direct link
Laurie, Robert; Bridglall, Beatrice L.; Arseneault, Patrick – SAGE Open, 2015
The effect of using a computer or paper and pencil on student writing scores on a provincial standardized writing assessment was studied. A sample of 302 francophone students wrote a short essay using a computer equipped with Microsoft Word with all of its correction functions enabled. One week later, the same students wrote a second short essay…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Achievement
Peer reviewed
Direct link
Charman, Melody – British Journal of Educational Technology, 2014
This small-scale pilot study aimed to establish how the mode of response in an examination affects candidates' performances on items that require an extended answer. The sample comprised 46 17-year-old students from two classes (one in a state secondary school and one in a state sixth-form college), who sat a mock A-level English Literature…
Descriptors: Computer Assisted Testing, Exit Examinations, English Literature, Secondary School Students
Peer reviewed
Direct link
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Peer reviewed
Direct link
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the Criterion® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed
PDF on ERIC: Download full text
Prvinchandar, Sunita; Ayub, Ahmad Fauzi Mohd – English Language Teaching, 2014
This study compared the effectiveness of two types of computer software for improving the English writing skills of pupils in a Malaysian primary school. Sixty students who participated in the seven-week training course were divided into two groups, with the experimental group using the StyleWriter software and the control group using the…
Descriptors: Writing Skills, Courseware, Writing Improvement, Elementary School Students
Peer reviewed
Direct link
Johnson, Martin; Nadas, Rita; Bell, John F. – British Journal of Educational Technology, 2010
There is a growing body of research literature that considers how the mode of assessment, either computer-based or paper-based, might affect candidates' performances. Despite this, a fairly narrow literature shifts the focus of attention to those making assessment judgements and considers issues of assessor consistency when…
Descriptors: English Literature, Examiners, Evaluation Research, Evaluators
Peer reviewed
Direct link
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that perhaps this might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Peer reviewed
Direct link
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for a lack of transparency or a lack of fit with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – 2002
Since the writing section of the Test of English as a Foreign Language (TOEFL) computer-based test (CBT) is a single-prompt essay test, it is very important to ensure that each prompt is as fair as possible to any subgroup of examinees, such as those with different native language backgrounds. A particular topic of interest in this study is the…
Descriptors: Comparative Analysis, Computer Assisted Testing, English (Second Language), Essay Tests
Peer reviewed
Direct link
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Peer reviewed
PDF on ERIC: Download full text
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the e-rater® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two e-rater® scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Anderson, Paul S. – 1987
Seven formats of educational testing were compared for student test preferences and for how well each evaluated learning. The formats were: (1) true/false; (2) multiple choice; (3) matching; (4) multiple-digit testing (MDT), in which a machine scores fill-in-the-blank items; (5) fill-in-the-blanks; (6) short answers; and (7) essay. A total of 1,440 survey…
Descriptors: College Students, Comparative Analysis, Computer Assisted Testing, Essay Tests
Peer reviewed
Anderson, Paul S. – International Journal of Educology, 1988
Seven formats of educational testing were compared according to student preferences/perceptions of how well each test method evaluates learning. Formats compared include true/false, multiple-choice, matching, multi-digit testing (MDT), fill-in-the-blank, short answer, and essay. Subjects were 1,440 university students. Results indicate that tests…
Descriptors: Achievement Tests, College Students, Comparative Analysis, Computer Assisted Testing