Showing 1 to 15 of 21 results
Peer reviewed
Mike Perkins; Jasper Roe; Binh H. Vu; Darius Postma; Don Hickerson; James McGaughran; Huy Q. Khuat – International Journal of Educational Technology in Higher Education, 2024
This study investigates the efficacy of six major Generative AI (GenAI) text detectors when confronted with machine-generated content modified to evade detection (n = 805). We compare these detectors to assess their reliability in identifying AI-generated text in educational settings, where they are increasingly used to address academic integrity…
Descriptors: Artificial Intelligence, Inclusion, Computer Software, Word Processing
Peer reviewed
Mayo Beltrán, Alba Mª; Fernández Sánchez, María Jesús; Montanero Fernández, Manuel; Martín Parejo, David – Practical Assessment, Research & Evaluation, 2022
This study compares the effects of two resources, a paper rubric (CR) or the comment bubbles from a word processor (CCB), to support peer co-evaluation of expository texts in primary education. A total of 57 students wrote a text which, after a peer co-evaluation process, was rewritten. To analyze the improvements in the texts, we used a rubric…
Descriptors: Scoring Rubrics, Evaluation Methods, Word Processing, Computer Software
Peer reviewed
Connell, Louise; Lynott, Dermot – Cognition, 2012
Abstract concepts are traditionally thought to differ from concrete concepts by their lack of perceptual information, which causes them to be processed more slowly and less accurately than perceptually-based concrete concepts. In two studies, we examined this assumption by comparing concreteness and imageability ratings to a set of perceptual…
Descriptors: Language Processing, Olfactory Perception, Word Processing, Reaction Time
Peer reviewed
Varank, Ilhan; Erkoç, M. Fatih; Büyükimdat, Meryem Köskeroglu; Aktas, Mehmet; Yeni, Sabiha; Adigüzel, Tufan; Cömert, Zafer; Esgin, Esad – EURASIA Journal of Mathematics, Science & Technology Education, 2014
The purpose of this study was to investigate the effectiveness of an online automated evaluation and feedback system that assessed students' word processing assignments prepared with Microsoft Office Word. The participants of the study were 119 undergraduate teacher education students, 86 of whom were female and 32 were male, enrolled in different…
Descriptors: Computer Literacy, Introductory Courses, Student Evaluation, Feedback (Response)
Peer reviewed
Jayal, Ambikesh; Shepperd, Martin – Journal on Educational Resources in Computing, 2009
In this article we explore a problematic aspect of automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics. Typically much of the meaning of a diagram resides in the labels; however, the choice of labeling is largely unrestricted. This means a correct solution may utilize differing yet semantically equivalent…
Descriptors: Spelling, Semantics, Problem Solving, Word Processing
Peer reviewed
Arntzen, Erik; Halstadtro, Lill-Beathe; Halstadtro, Monica – Analysis of Verbal Behavior, 2009
The purpose of the study was to extend the literature on verbal self-regulation by using the "silent dog" method to evaluate the role of verbal regulation over nonverbal behavior in 2 individuals with autism. Participants were required to talk aloud while performing functional computer tasks. Then the effects of distracters with increasing demands…
Descriptors: Autism, Males, Self Control, Evaluation Methods
Peer reviewed
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their coursework on computers, but at the end of the semester they are examined by handwritten essays. Intuitively, it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that the choice might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Peer reviewed
Ostwald, Tina; Stulz, Karin – Journal of Education for Business, 1996
A criterion-based strategy for evaluating student performance in computer application courses requires modifying evaluation tools with each technological change. Students benefit by not competing for grades, and student assessment is more consistent across courses and semesters. (Author/JOW)
Descriptors: Business Education, Criterion Referenced Tests, Evaluation Methods, Minimum Competencies
Peer reviewed
Smith, David; Keep, Rosslyn – Educational Research, 1986
Children aged 6 to 14 years in 10 schools in southern England were interviewed to determine the criteria that children use to evaluate educational and other software. Data suggest that the children were mature and sophisticated in their judgments. Their evaluative criteria derived from the standards of mass consumer electronics (home computer…
Descriptors: Children, Computer Software, Educational Games, Elementary Education
Peer reviewed
Rushinek, Avi; Rushinek, Sara – Office Systems Research Journal, 1984
Describes results of a system rating study in which users responded to WPS (word processing software) questions. Study objectives were data collection and evaluation of variables; statistical quantification of WPS's contribution (along with other variables) to user satisfaction; design of an expert system to evaluate WPS; and database update and…
Descriptors: Artificial Intelligence, Computer Software, Evaluation Methods, Information Retrieval
Starr, Douglas P. – Collegiate Microcomputer, 1991
Describes a method that writing instructors can use for evaluating and commenting on college students' composition papers that uses a stand-alone word processor. Software is discussed, the instructor's role is explained, methods of evaluation and marking papers are suggested, and electronic grading is described. (three references) (LRW)
Descriptors: Computer Software, Evaluation Methods, Grading, Higher Education
Wetzel, Keith – Computing Teacher, 1985
Discusses need for development of keyboarding skills at the elementary school level; issues to be addressed when developing keyboarding curricula (criterion for competence, how much is necessary, time needed, who should teach and how); and program considerations (hardware, curriculum, principles of instruction, instructional periods, classroom or…
Descriptors: Class Organization, Curriculum Development, Elementary Education, Evaluation Methods
Peer reviewed
Krucli, Thomas E. – English Journal, 2004
A high school teacher has created a more effective method of responding to students' papers by using the tools available in common word-processing programs and widely available technology. Assessment has become a powerful tool for instruction and student self-reflection, as most students respond positively towards…
Descriptors: Feedback, Secondary School Teachers, High School Students, Educational Technology
Peer reviewed
Breland, Hunter; Lee, Yong-Won; Muraki, Eiji – Educational and Psychological Measurement, 2005
Eighty-three Test of English as a Foreign Language (TOEFL) writing prompts administered via computer-based testing between July 1998 and August 2000 were examined for differences attributable to the response mode (handwriting or word processing) chosen by examinees. Differences were examined statistically using polytomous logistic regression. A…
Descriptors: Evaluation Methods, Word Processing, Handwriting, Effect Size
Ransdell, Sarah; McCloskey, Michael – Collegiate Microcomputer, 1992
Describes a teaching technique that uses word processing software to automate the process of providing feedback about psychology assignments in a microcomputer-based laboratory course. Automation of the feedback process is explained, benefits to students and instructors are discussed, and student reactions are reported. (six references) (LRW)
Descriptors: Check Lists, Computer Assisted Instruction, Computer Software, Evaluation Methods