Showing 1 to 15 of 52 results
Peer reviewed
Ricardo Conejo Muñoz; Beatriz Barros Blanco; José del Campo-Ávila; José L. Triviño Rodriguez – IEEE Transactions on Learning Technologies, 2024
Automatic question generation and the assessment of procedural knowledge are still challenging research topics. This article focuses on the case of grammar-parsing techniques for compiler construction. There are two well-known parsing techniques: top-down parsing with LL(1) grammars and bottom-up parsing with LR(1) grammars. Learning these techniques and…
Descriptors: Automation, Questioning Techniques, Knowledge Level, Language
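The Conejo Muñoz et al. entry above contrasts top-down LL(1) and bottom-up LR(1) parsing. Purely as a hedged illustration, and not material from that article, the following minimal LL(1)-style recursive-descent parser for a toy arithmetic grammar shows the top-down side of that contrast; the grammar and all names are invented for this sketch.

# Minimal LL(1)-style recursive-descent parser for the toy grammar
#   E -> T (('+' | '-') T)*
#   T -> F (('*' | '/') F)*
#   F -> NUMBER | '(' E ')'
# Illustrative only; not taken from the article above.

import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(text):
    """Split the input into number and single-character operator tokens."""
    for number, op in TOKEN.findall(text):
        yield ("NUM", int(number)) if number else ("OP", op)
    yield ("EOF", None)

class Parser:
    def __init__(self, text):
        self.tokens = list(tokenize(text))
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def eat(self, kind, value=None):
        tok = self.tokens[self.pos]
        if tok[0] != kind or (value is not None and tok[1] != value):
            raise SyntaxError(f"expected {kind} {value!r}, got {tok}")
        self.pos += 1
        return tok

    def parse_e(self):          # E -> T (('+' | '-') T)*
        result = self.parse_t()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            op = self.eat("OP")[1]
            rhs = self.parse_t()
            result = result + rhs if op == "+" else result - rhs
        return result

    def parse_t(self):          # T -> F (('*' | '/') F)*
        result = self.parse_f()
        while self.peek() in (("OP", "*"), ("OP", "/")):
            op = self.eat("OP")[1]
            rhs = self.parse_f()
            result = result * rhs if op == "*" else result / rhs
        return result

    def parse_f(self):          # F -> NUMBER | '(' E ')'
        if self.peek()[0] == "NUM":
            return self.eat("NUM")[1]
        self.eat("OP", "(")
        value = self.parse_e()
        self.eat("OP", ")")
        return value

print(Parser("2*(3+4)").parse_e())  # -> 14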
Peer reviewed
Han, Chao – Language Testing, 2022
Over the past decade, testing and assessing spoken-language interpreting has garnered an increasing amount of attention from stakeholders in interpreter education, professional certification, and interpreting research. This is because in these fields assessment results provide a critical evidential basis for high-stakes decisions, such as the…
Descriptors: Translation, Language Tests, Testing, Evaluation Methods
Peer reviewed
Anita Pásztor-Kovács; Attila Pásztor; Gyöngyvér Molnár – Interactive Learning Environments, 2023
In this paper, we present an agenda for the research directions we recommend in addressing the issues of realizing and evaluating communication in CPS instruments. We outline our ideas on potential ways to improve: (1) generalizability in Human-Human assessment tools and ecological validity in Human-Agent ones; (2) flexible and convenient use of…
Descriptors: Cooperation, Problem Solving, Evaluation Methods, Teamwork
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
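The Allen, Graesser, and McNamara entry above is about replacing time-intensive human scoring with automated textual analyses. As a rough sketch of what such an NLP pipeline can compute, and not the measures used in that submission, the snippet below derives a few simple lexical indices with the Python standard library; the feature names are assumptions made for this example.

# Toy example of automated text features of the kind NLP scoring pipelines use.
# The specific features and their interpretation are illustrative assumptions,
# not the measures used in the Grantee Submission above.

import re
from statistics import mean

def text_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "n_words": len(words),
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        "mean_word_length": mean(len(w) for w in words) if words else 0.0,
        "mean_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }

sample = "Automated scoring saves time. It also scales to large corpora."
print(text_features(sample))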
Peer reviewed
Tong Li; Sarah D. Creer; Tracy Arner; Rod D. Roscoe; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
Automated writing evaluation (AWE) tools can facilitate teachers' analysis of and feedback on students' writing. However, increasing evidence indicates that writing instructors experience challenges in implementing AWE tools successfully. For this reason, our development of the Writing Analytics Tool (WAT) has employed a participatory approach…
Descriptors: Automation, Writing Evaluation, Learning Analytics, Participatory Research
Peer reviewed
C. Sean Burns; Jennifer Pusateri; Daniela K. DiGiacomo – Journal of Education for Library and Information Science, 2025
This paper presents a novel approach to designing an online, open-source course in systems librarianship, an area of librarianship that may be perceived as complex and intimidating because of the technologies involved. The course design focuses on making systems librarianship more approachable for library and information science students who may…
Descriptors: Library Science, Online Courses, Open Education, Instructional Design
Peer reviewed
O'Leary, Michael; Scully, Darina; Karakolidis, Anastasios; Pitsia, Vasiliki – European Journal of Education, 2018
The role of digital technology in assessment has received a great deal of attention in recent years. Naturally, technology offers many practical benefits, such as increased efficiency with regard to the design, implementation and scoring of existing assessments. More importantly, it also has the potential to have profound, transformative effects…
Descriptors: Computer Assisted Testing, Educational Technology, Technology Uses in Education, Evaluation Methods
Ivaniushin, Dmitrii A.; Shtennikov, Dmitrii G.; Efimchick, Eugene A.; Lyamin, Andrey V. – International Association for Development of the Information Society, 2016
This paper describes an approach to using automated assessments in online courses. The Open edX platform is used as the online course platform. The new assessment type uses Scilab as the learning and solution-validation tool. This approach allows automated individual variant generation and automated solution checks without involving the course…
Descriptors: Online Courses, Evaluation Methods, Computer Assisted Testing, Large Group Instruction
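Ivaniushin et al. describe automated individual variant generation and automated solution checks built on Open edX and Scilab. The fragment below is only a schematic pure-Python stand-in for that workflow, not the authors' implementation (neither Scilab nor the Open edX integration appears here): it derives a reproducible per-student variant from a seed and checks a numeric answer against the reference solution.

# Schematic illustration of per-student variant generation plus automated
# checking. The paper's actual system relies on Scilab and Open edX; this
# pure-Python stand-in only mirrors the general workflow.

import hashlib
import random

def make_variant(student_id: str, problem_seed: int):
    """Derive a reproducible parameter set for one student."""
    digest = hashlib.sha256(f"{problem_seed}:{student_id}".encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    return {"prompt": f"Compute a * b for a={a}, b={b}", "reference": a * b}

def check_answer(variant, submitted: float, tol: float = 1e-6) -> bool:
    """Automated check: compare the submission with the reference solution."""
    return abs(submitted - variant["reference"]) <= tol

variant = make_variant("student-42", problem_seed=2016)
print(variant["prompt"])
print(check_answer(variant, submitted=variant["reference"]))  # True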
Peer reviewed
Munoz, Carlos; Garcia-Penalvo, Francisco J.; Morales, Erla Mariela; Conde, Miguel Angel; Seoane, Antonio M. – International Journal of Distance Education Technologies, 2012
Automation toward efficiency is the aim of most intelligent systems in an educational context: automating the calculation of results allows experts to spend most of their time on important tasks rather than on retrieving, ordering, and interpreting information. In this paper, the authors provide a tool that easily evaluates Learning Object quality…
Descriptors: Evaluation Methods, Teaching Methods, Automation, Educational Resources
Peer reviewed
Samuels, Ruth Gallegos; Griffy, Henry – portal: Libraries and the Academy, 2012
This article discusses best practices for evaluating open source software for use in library projects, based on the authors' experience evaluating electronic publishing solutions. First, it presents a brief review of the literature, emphasizing the need to evaluate open source solutions carefully in order to minimize Total Cost of Ownership. Next,…
Descriptors: Electronic Publishing, Computer Software, Comparative Analysis, Open Source Technology
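Samuels and Griffy stress evaluating open source options carefully to minimize Total Cost of Ownership. The cost categories and figures below are entirely hypothetical, not drawn from the article, but they show the simple arithmetic such a comparison reduces to: a zero-dollar license can still lose on staffing, hosting, and setup over a multi-year horizon.

# Hypothetical Total Cost of Ownership comparison. All categories and dollar
# amounts are invented for illustration; they are not from the article above.

def tco(license_per_year, hosting_per_year, staff_hours_per_year,
        staff_rate, one_time_setup, years=5):
    """Sum one-time and recurring costs over the planning horizon."""
    recurring = license_per_year + hosting_per_year + staff_hours_per_year * staff_rate
    return one_time_setup + years * recurring

open_source = tco(license_per_year=0, hosting_per_year=2_400,
                  staff_hours_per_year=200, staff_rate=40, one_time_setup=15_000)
hosted_vendor = tco(license_per_year=8_000, hosting_per_year=0,
                    staff_hours_per_year=40, staff_rate=40, one_time_setup=2_000)
print(open_source, hosted_vendor)  # 67000 50000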
Peer reviewed
Georgouli, Katerina; Guerreiro, Pedro – International Journal on E-Learning, 2011
This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…
Descriptors: Foreign Countries, Electronic Learning, Programming, Internet
Balas, Janet L. – Computers in Libraries, 2005
In days that seem very long ago, when the card catalog was the only way to search a library's collection, there was very little debate about the usability of the catalog. Librarians were most concerned about properly formatting catalog cards and, of course, ensuring that the cards were filed correctly. Today, the card catalog has been replaced in…
Descriptors: Electronic Libraries, Library Automation, Internet, Evaluation Methods
Peer reviewed
Joy, Mike; Griffiths, Nathan; Boyatt, Russell – Journal on Educational Resources in Computing, 2005
Computer programming lends itself to automated assessment. With appropriate software tools, program correctness can be measured, along with an indication of quality according to a set of metrics. Furthermore, the regularity of program code allows plagiarism detection to be an integral part of the tools that support assessment. In this paper, we…
Descriptors: Plagiarism, Evaluation Methods, Programming, Feedback (Response)
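Joy, Griffiths, and Boyatt observe that program correctness can be measured with software tools and that the regularity of program code makes plagiarism detection a natural companion to automated assessment. Purely as a sketch under those two headings, and not the system described in their article, the fragment below runs a toy submission against reference test cases and computes a naive token-overlap similarity between two submissions.

# Illustrative sketch: test-based correctness checking plus a naive
# token-overlap similarity measure. Real assessment tools, including those
# discussed in the article above, are far more sophisticated.

import keyword
import re

def run_tests(func, cases):
    """Return the fraction of (args, expected) cases the function passes."""
    passed = sum(1 for args, expected in cases if func(*args) == expected)
    return passed / len(cases)

def token_similarity(src_a: str, src_b: str) -> float:
    """Jaccard overlap of non-keyword tokens in two source strings."""
    tok = lambda s: set(re.findall(r"[A-Za-z_]\w*", s)) - set(keyword.kwlist)
    a, b = tok(src_a), tok(src_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Example: assess a tiny "absolute value" submission.
submission = "def absval(x):\n    return x if x >= 0 else -x\n"
namespace = {}
exec(submission, namespace)
print(run_tests(namespace["absval"], [((3,), 3), ((-2,), 2), ((0,), 0)]))  # 1.0
print(token_similarity(submission, "def absval(n):\n    return abs(n)\n"))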
Peer reviewed
Lyons, Lucy E. – Journal of Academic Librarianship, 2005
Over 500 libraries have employed OCLC's iCAS and its successor Automated Collection Assessment and Analysis Services (ACAS) as bibliometric tools to evaluate monograph collections. This examination of ACAS reveals both its methodological limitations and its feasibility as an indicator of collecting patterns. The results can be used to maximize the…
Descriptors: Library Materials, Library Science, Publications, Library Automation
Peer reviewed
Salton, Gerard; And Others – Information Processing & Management, 1997
Discussion of the use of information retrieval techniques for automatic generation of semantic hypertext links focuses on automatic text summarization. Topics include World Wide Web links, text segmentation, and evaluation of text summarization by comparing automatically generated abstracts with manually prepared abstracts. (Author/LRW)
Descriptors: Abstracts, Automation, Comparative Analysis, Evaluation Methods
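Salton et al. evaluate text summarization by comparing automatically generated abstracts with manually prepared ones. As an assumption-laden illustration of that comparison step only (word-overlap recall is one crude stand-in among many possible measures, and nothing here is from the 1997 paper), an extracted summary can be scored against a reference abstract like this:

# Toy comparison of an automatically extracted summary against a manual
# abstract, using word-overlap recall. Illustrative only; not the evaluation
# procedure of the 1997 article.

import re

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def overlap_recall(auto_summary: str, manual_abstract: str) -> float:
    """Fraction of the manual abstract's vocabulary covered by the summary."""
    auto, manual = set(words(auto_summary)), set(words(manual_abstract))
    return len(auto & manual) / len(manual) if manual else 0.0

auto = "Hypertext links are generated automatically from retrieval techniques."
manual = "Information retrieval techniques can generate semantic hypertext links."
print(round(overlap_recall(auto, manual), 2))  # 0.5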