Showing 1 to 15 of 17 results
Peer reviewed
Alsubait, Tahani; Parsia, Bijan; Sattler, Uli – Research in Learning Technology, 2012
Different computational models for generating analogies of the form "A is to B as C is to D" have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ) format that can be used…
Descriptors: Computer Assisted Testing, Programming, Computer Software, Computer Software Evaluation
Peer reviewed
Chao, K.-J.; Hung, I.-C.; Chen, N.-S. – Journal of Computer Assisted Learning, 2012
Online learning has been rapidly developing in the last decade. However, there is very little literature available about the actual adoption of online synchronous assessment approaches and any guidelines for effective assessment design and implementation. This paper aims at designing and evaluating the possibility of applying online synchronous…
Descriptors: Electronic Learning, Student Evaluation, Online Courses, Computer Software
Peer reviewed
Fan, Ya-Ching; Wang, Tzu-Hua; Wang, Kuo-Hua – Computers & Education, 2011
This research investigates the effect of a web-based model, named "Practicing, Reflecting, and Revising with Web-based Assessment and Test Analysis system (P2R-WATA) Assessment Literacy Development Model," on enhancing assessment knowledge and perspectives of secondary in-service teachers, and adopts a single group experimental research…
Descriptors: Research Design, Test Items, Summer Programs, Prior Learning
Behrens, John T.; Mislevy, Robert J.; DiCerbo, Kristen E.; Levy, Roy – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2010
The world in which learning and assessment must take place is rapidly changing. The digital revolution has created a vast space of interconnected information, communication, and interaction. Functioning effectively in this environment requires so-called 21st century skills such as technological fluency, complex problem solving, and the ability to…
Descriptors: Evidence, Student Evaluation, Educational Assessment, Influence of Technology
Peer reviewed
Yin, Alexander C.; Volkwein, J. Fredericks – New Directions for Institutional Research, 2010
In both purpose and practice, general education in American higher education has experienced several recurring debates and national revivals. In a world with constantly evolving technology, students need a strong general education to be flexible and adaptable to the changes of the world. General education is an important component and requirement…
Descriptors: Institutional Research, General Education, Accreditation (Institutions), Definitions
Peer reviewed
Chu, Hui-Chun; Hwang, Gwo-Jen; Huang, Yueh-Min – Innovations in Education and Teaching International, 2010
Conventional testing systems usually give students a score as their test result, but do not show them how to improve their learning performance. Researchers have indicated that students would benefit more if individual learning guidance could be provided. However, most of the existing learning diagnosis models ignore the fact that one concept…
Descriptors: Test Results, Teaching Methods, Elementary School Students, Elementary School Teachers
Peer reviewed
McPherson, Douglas – Interactive Technology and Smart Education, 2009
Purpose: The purpose of this paper is to describe how and why Texas A&M University at Qatar (TAMUQ) has developed a system aiming to effectively place students in freshman and developmental English programs. The placement system includes: triangulating data from external test scores, with scores from a panel-marked hand-written essay (HWE),…
Descriptors: Student Placement, Educational Testing, English (Second Language), Second Language Instruction
Peer reviewed
Johnson, Martin; Nadas, Rita – Learning, Media and Technology, 2009
Within large scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the…
Descriptors: Reading Comprehension, Educational Assessment, Internet, Essay Tests
Peer reviewed
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego – Journal of Educational Measurement, 2007
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Descriptors: Inferences, Models, Item Response Theory, Cognitive Measurement
Peer reviewed
Marks, Anthony M.; Cronje, Johannes C. – Educational Technology & Society, 2008
Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often occur in large computer testing venues in which test security may be compromised. In an attempt to limit the likelihood of cheating in such venues, randomised presentation of items is automatically…
Descriptors: Educational Assessment, Educational Testing, Research Needs, Test Items
Peer reviewed
Al-A'ali, Mansoor – Educational Technology & Society, 2007
Computer adaptive testing is the study of scoring tests and questions based on assumptions concerning the mathematical relationship between examinees' ability and the examinees' responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least square method, a…
Descriptors: Educational Testing, Higher Education, Elementary Secondary Education, Student Evaluation
Peer reviewed (PDF full text available on ERIC)
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Peer reviewed (PDF full text available on ERIC)
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed (PDF full text available on ERIC)
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Tatsuoka, Kikumi K. – 1985
This paper introduces a probabilistic model that is capable of diagnosing and classifying cognitive errors in a general problem-solving domain. Item response theory is used to deal with the variability of response errors. Responses from a 38 item fraction addition test given to 595 junior high school students are used to illustrate the model.…
Descriptors: Artificial Intelligence, Cognitive Processes, Computer Assisted Testing, Computer Software