Showing all 10 results
Peer reviewed
Lonneke H. Schellekens; Marieke F. van der Schaaf; Cees P.M. van der Vleuten; Frans J. Prins; Saskia Wools; Harold G. J. Bok – Quality Assurance in Education: An International Perspective, 2023
Purpose: This study aims to report the design, development and evaluation of a digital quality assurance application intended to improve and ensure the quality of assessment programmes in higher education. Design/methodology/approach: The application was developed using a design-based research (DBR) methodology. The application's design was…
Descriptors: Computer Software, Computer System Design, Programming, Higher Education
Payne, J. Scott; San Pedro, Sweet Z.; Moore, Raeal; Sanchez, Edgar I. – ACT, Inc., 2020
High-stakes, standardized testing plays an important role in the lives of many students as they apply to college and compete for scholarships. In states where college readiness measures, like the ACT® test, are used for school and district accountability, the ACT scores of all students, whether college-bound or not, are a top priority for school…
Descriptors: Test Preparation, Testing Programs, State Programs, School Districts
Peer reviewed
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to the wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
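The Wang, Chen, and Jin abstract notes that simply reverse-recoding negatively worded items may not make them behave like positively worded ones. A minimal Python sketch of that reverse-recoding step (the item names, the 5-point scale, and the response values are hypothetical, not the study's data) shows the conventional transformation the bi-factor IRT approach is meant to go beyond:

    import pandas as pd

    # Hypothetical responses on a 5-point Likert scale (1 = strongly disagree,
    # 5 = strongly agree); item names and the negatively worded set are made up.
    responses = pd.DataFrame({
        "q1_pos": [5, 4, 4, 2],
        "q2_neg": [1, 2, 2, 4],   # negatively worded item
        "q3_pos": [4, 5, 3, 2],
    })
    negatively_worded = ["q2_neg"]

    # Conventional reverse recoding: map 1 <-> 5, 2 <-> 4, and so on.
    scale_min, scale_max = 1, 5
    recoded = responses.copy()
    recoded[negatively_worded] = (scale_min + scale_max) - recoded[negatively_worded]
    print(recoded)

The study's argument is that wording effects can persist even after this step, which is why it models them explicitly rather than relying on recoding alone.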
Peer reviewed
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
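The Zhang, Breyer, and Lorenz abstract says scoring effectiveness was evaluated partly on agreement between automated and human scores. The report's exact statistics are not given here; as one common human-machine agreement measure for essay scores, quadratic weighted kappa can be computed with a short NumPy sketch:

    import numpy as np

    def quadratic_weighted_kappa(human, machine, min_rating, max_rating):
        """Agreement between two sets of integer essay scores."""
        human, machine = np.asarray(human), np.asarray(machine)
        n_ratings = max_rating - min_rating + 1
        # Observed joint distribution of (human, machine) score pairs.
        observed = np.zeros((n_ratings, n_ratings))
        for h, m in zip(human, machine):
            observed[h - min_rating, m - min_rating] += 1
        observed /= observed.sum()
        # Expected joint distribution if the two raters were independent.
        expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
        # Quadratic disagreement weights: larger penalty for larger gaps.
        idx = np.arange(n_ratings)
        weights = (idx[:, None] - idx[None, :]) ** 2 / (n_ratings - 1) ** 2
        return 1.0 - (weights * observed).sum() / (weights * expected).sum()

    human_scores = [3, 4, 2, 5, 3, 4]
    machine_scores = [3, 3, 2, 4, 3, 4]   # hypothetical automated scores
    print(round(quadratic_weighted_kappa(human_scores, machine_scores, 1, 6), 3))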
Peer reviewed
Islam, A. K. M. Najmul – Journal of Information Systems Education, 2011
This paper examines factors that influence the post-adoption satisfaction of educators with e-learning systems. Based on the expectation-confirmation framework, we propose a research model that demonstrates how post-adoption beliefs affect post-adoption satisfaction. The model was tested at a university by educators (n = 175) who use an e-learning…
Descriptors: Electronic Learning, Testing Programs, Participant Satisfaction, Teacher Attitudes
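The Islam abstract describes a research model in which post-adoption beliefs predict educators' post-adoption satisfaction with an e-learning system. Purely as an illustration of that kind of belief-to-satisfaction relationship (the paper's actual constructs, survey items, and estimation method are not given here), a minimal least-squares sketch on synthetic data:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 175  # sample size matching the abstract; the data below are synthetic

    # Hypothetical post-adoption beliefs (e.g., perceived usefulness and
    # confirmation of expectations), standardized scores.
    usefulness = rng.normal(size=n)
    confirmation = rng.normal(size=n)

    # Synthetic satisfaction generated from the beliefs plus noise.
    satisfaction = 0.5 * usefulness + 0.3 * confirmation + rng.normal(scale=0.7, size=n)

    # Ordinary least squares: how strongly do the beliefs predict satisfaction?
    X = np.column_stack([np.ones(n), usefulness, confirmation])
    coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
    print(dict(zip(["intercept", "usefulness", "confirmation"], np.round(coef, 2))))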
Peer reviewed
Fisher, Sylvia Kay; Shibutani, Hirohide – Florida Journal of Educational Research, 1988
A generalizable systems-based needs analysis model was developed to help school district testing and evaluation offices evaluate current problems with their information processing systems and identify additional computer capabilities required to upgrade their systems. The model contains four main phases, namely: definition of the department…
Descriptors: Computer Networks, Computer Software, Elementary Secondary Education, Evaluation Methods
McGuire, Dennis P. – 1984
Efficient methods of using the Statistical Package for the Social Sciences (SPSS) to analyze National Assessment of Educational Progress (NAEP) data files are discussed. One error in the NAEP SPSS file is discussed, and another error (which may be system-dependent) is mentioned. In addition, purely mathematical methods are used to address the…
Descriptors: Achievement Tests, Computer Software, Educational Assessment, Elementary Secondary Education
Dabney, Marian E.; Stewart, Theadora – 1990
This study investigated the construct validity of the revised Special Education-Mental Handicaps Georgia Teacher Certification Test (MH-TCT) using hierarchical confirmatory factor analysis and LISREL VI. The primary objective was to determine whether first-order and second-order factors correspond to item/objective/test relationships defined by…
Descriptors: Computer Assisted Testing, Computer Software, Construct Validity, Content Validity
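The Dabney and Stewart abstract refers to first-order and second-order factors in a hierarchical confirmatory factor analysis. As a hedged sketch of what such a structure implies (the MH-TCT's actual factors, loadings, and item counts are not stated here), the following NumPy simulation generates item scores from one second-order factor and three first-order factors, then prints the blocked correlation pattern a hierarchical CFA would try to recover:

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_first_order, items_per_factor = 500, 3, 3  # hypothetical sizes

    # One second-order factor drives the three first-order factors.
    g = rng.normal(size=(n_subjects, 1))
    second_order_loadings = np.array([[0.8], [0.7], [0.6]])
    first_order = g @ second_order_loadings.T + 0.5 * rng.normal(size=(n_subjects, n_first_order))

    # Each first-order factor is measured by its own block of items.
    loadings = np.zeros((n_first_order * items_per_factor, n_first_order))
    for f in range(n_first_order):
        loadings[f * items_per_factor:(f + 1) * items_per_factor, f] = 0.7
    items = first_order @ loadings.T + 0.5 * rng.normal(size=(n_subjects, loadings.shape[0]))

    # Within-block correlations are strongest; between-block correlations are
    # weaker but nonzero because of the shared second-order factor.
    print(np.round(np.corrcoef(items, rowvar=False), 2))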
Varnhagen, Stanley; Calder, Peter W. – 1985
The Microcomputer Diagnostic Testing Project (MDTP) was designed to broaden the use of microcomputers in student testing at the elementary and secondary school levels and across a variety of test item formats. MDTP was designed to administer tests, to allow teachers to create or revise tests, to print out test forms when desired, to score tests, to…
Descriptors: Computer Assisted Testing, Computer Software, Educational Testing, Elementary Secondary Education
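The Varnhagen and Calder abstract lists the MDTP's functions: administering tests, letting teachers create or revise tests, printing test forms, and scoring. As a rough sketch only (the class and field names below are hypothetical, not MDTP's actual design), the same responsibilities expressed as a small Python data structure:

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        prompt: str
        choices: list[str]
        answer_index: int

    @dataclass
    class Test:
        title: str
        items: list[Item] = field(default_factory=list)

        def print_form(self) -> None:
            """Print a paper-style test form for offline administration."""
            print(self.title)
            for n, item in enumerate(self.items, 1):
                print(f"{n}. {item.prompt}")
                for letter, choice in zip("ABCD", item.choices):
                    print(f"   {letter}) {choice}")

        def score(self, responses: list[int]) -> int:
            """Count correct responses against the answer key."""
            return sum(r == item.answer_index for r, item in zip(responses, self.items))

    quiz = Test("Sample diagnostic quiz", [
        Item("2 + 2 =", ["3", "4", "5", "6"], answer_index=1),
        Item("Capital of France?", ["Paris", "Rome", "Madrid", "Berlin"], answer_index=0),
    ])
    quiz.print_form()
    print("Score:", quiz.score([1, 0]), "of", len(quiz.items))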
Kingston, Neal; And Others – 1985
A necessary prerequisite to the operational use of item response theory (IRT) in any testing program is the investigation of the feasibility of such an approach. This report presents the results of such research for the Graduate Management Admission Test (GMAT). Despite the fact that GMAT data appear to violate a basic assumption of the…
Descriptors: College Entrance Examinations, Computer Software, Correlation, Equated Scores
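The Kingston et al. abstract concerns the feasibility of item response theory for the GMAT. As a worked illustration of the kind of model involved (the report's specific IRT model and parameterization are not stated here), the three-parameter logistic item response function in a minimal NumPy sketch:

    import numpy as np

    def p_correct_3pl(theta, a, b, c, D=1.7):
        """Probability of a correct response under the three-parameter logistic
        (3PL) model: c + (1 - c) / (1 + exp(-D * a * (theta - b)))."""
        return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

    # Hypothetical item: discrimination a = 1.2, difficulty b = 0.5,
    # pseudo-guessing c = 0.2, evaluated for abilities -1, 0, and +1.
    print(np.round(p_correct_3pl(np.array([-1.0, 0.0, 1.0]), a=1.2, b=0.5, c=0.2), 3))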