Karolcík, Štefan; Cipková, Elena; Hrušecký, Roman; Veselský, Milan – Informatics in Education, 2015
Although digital technologies are increasingly used in learning and education, there is still a lack of professional evaluation tools capable of assessing the quality of the digital teaching aids in use in a comprehensive and objective manner. Construction of the Comprehensive Evaluation of Electronic Learning Tools and…
Descriptors: Electronic Learning, Educational Technology, Courseware, Evaluation Methods
Lee, Jeong-Sook; Kim, Sung-Wan – Journal of Educational Computing Research, 2015
The purpose of this study is to develop and validate an evaluation tool for educational apps for smart education. Based on literature reviews, a potential model for evaluating educational apps was suggested. An evaluation tool consisting of 57 survey items was delivered to 156 students in middle and high schools. An exploratory factor analysis was…
Descriptors: Educational Technology, Courseware, Computer Software Evaluation, Test Construction
Stonehouse, Pauline; Keengwe, Jared – International Journal of Information and Communication Technology Education, 2013
The purpose of this study was (a) to describe the introduction of mVAL software and Charlotte Danielson Rubrics (CDR) as teacher evaluation tools; (b) to compare the process and outcomes of the new initiative with traditional systems; and (c) to evaluate the software from the perspective of participants in the system. This study highlights the…
Descriptors: Public Schools, Teacher Effectiveness, Educational Technology, Teacher Evaluation
Lee, Cheng-Yuan; Cherner, Todd Sloan – Journal of Information Technology Education: Research, 2015
There is a pressing need for an evaluation rubric that examines all aspects of educational apps designed for instructional purposes. In past decades, many rubrics have been developed for evaluating educational computer-based programs; however, rubrics designed for evaluating the instructional implications of educational apps are scarce. When an…
Descriptors: Instructional Material Evaluation, Educational Technology, Scoring Rubrics, Evaluation Methods
Santoro, Lana Edwards; Bishop, M. J. – Computers in the Schools, 2010
It seems appropriate, if not necessary, to use empirically supported criteria to evaluate reading software applications. This study's purpose was to develop a research-based evaluation framework and review selected beginning reading software that might be used with struggling beginning readers. Thirty-one products were reviewed according to…
Descriptors: Beginning Reading, Emergent Literacy, Computer Software Evaluation, Computer Software Selection
Incikabi, Lutfi; Sancar Tokmak, Hatice – Educational Media International, 2012
This case study examined the educational software evaluation processes of pre-service teachers who attended either expertise-based training (XBT) or traditional training in conjunction with a Software-Evaluation checklist. Forty-three mathematics teacher candidates and three experts participated in the study. All participants evaluated educational…
Descriptors: Foreign Countries, Novices, Check Lists, Mathematics Education
Phillips, Rob; McNaught, Carmel; Kennedy, Gregor – Routledge, Taylor & Francis Group, 2011
How can the average educator who teaches online, without experience in evaluating emerging technologies, build on what is successful and modify what is not? Written for educators who feel ill-prepared when required to evaluate e-learning initiatives, "Evaluating e-Learning" offers step-by-step guidance for conducting an evaluation plan of…
Descriptors: Electronic Learning, Educational Research, Online Courses, Web Based Instruction
Strobl, Carola; Jacobs, Geert – Computer Assisted Language Learning, 2011
In this article, we set out to assess QuADEM (Quality Assessment of Digital Educational Material), one of the latest methods for evaluating online language learning courseware. What is special about QuADEM is that the evaluation is based on observing the actual usage of the online courseware and that, from a checklist of 12 different components,…
Descriptors: Foreign Countries, Electronic Learning, Video Technology, Feedback (Response)
Ounaies, Houda Zouari; Jamoussi, Yassine; Ben Ghezala, Henda Hajjami – Themes in Science and Technology Education, 2008
Currently, e-learning systems are mainly web-based applications that serve a wide range of users all over the world. Meeting learners' needs is considered a key issue in guaranteeing the success of these systems. Much research has focused on providing adaptive systems. Nevertheless, evaluation of adaptivity is still in an exploratory phase.…
Descriptors: Media Adaptation, Instructional Design, Electronic Learning, Management Information Systems
Copeland, Peter – 1988
The various criteria upon which interactive video courseware can be judged are outlined. The criteria relate to the critical areas of program usage, content, interactivity, production, presentation, design, and programming. The mnemonic HICUPPP is introduced to describe seven key areas that categorize criteria which can be used to evaluate…
Descriptors: Computer Assisted Instruction, Courseware, Evaluation Criteria, Evaluation Methods
Criswell, Eleanor L.; Swezey, Robert W. – Educational Technology, 1984
Describes the general principles of a nonexperimental, learning theory-based courseware evaluation that calls attention to instructional sequences in the courseware to determine whether the sequence is programmed around learning principles. The results of using this type of evaluation for two computerized training devices are summarized. (MBR)
Descriptors: Check Lists, Courseware, Definitions, Evaluation Criteria
Northwest Regional Educational Lab., Portland, OR. – 1986
This guide developed by MicroSIFT, a clearinghouse for microcomputer-based educational software and courseware, provides background information and forms to aid teachers and other educators in evaluating available microcomputer courseware. The evaluation process comprises six stages: (1) sifting, which screens out those programs that are not…
Descriptors: Computer Assisted Instruction, Courseware, Elementary Secondary Education, Evaluation Criteria
Ashmore, Timothy M. – 1984
The relative worth of any evaluation instrument depends on user needs and the material being evaluated. Potential users of a good computer assisted instruction (CAI) instrument include purchasers, authors, reviewers, and publishers of CAI materials. Although each user has unique needs, a good instrument will serve both to educate and discriminate,…
Descriptors: Computer Assisted Instruction, Courseware, Evaluation Criteria, Evaluation Methods
Bangert-Drowns, Robert L.; Kozma, Robert B. – Journal of Research on Computing in Education, 1989
Describes assessment procedures used to select winners of the EDUCOM/NCRIPTAL (National Center for Research to Improve Postsecondary Teaching and Learning) Higher Education Software Awards program; presents the evaluative criteria used for software assessment; and lists the award-winning software for 1987. (32 references) (LRW)
Descriptors: Awards, Computer Assisted Instruction, Courseware, Evaluation Criteria
Harrison, Colin – 1984
Differences among sets of criteria for evaluating microcomputer software are discussed. They are set against the results of three studies in which teachers in the United Kingdom evaluated five programs used in reading or English lessons. A comparison of the checklist criteria with the case study data was made using Stake's (1967) matrix of…
Descriptors: Case Studies, Check Lists, Courseware, Evaluation Criteria