Showing all 4 results
Peer reviewed
Tokmak, Hatice Sancar; Incikabi, Lutfi; Yelken, Tugba Yanpar – Australasian Journal of Educational Technology, 2012
This comparative case study investigated the educational software evaluation processes of both experts and novices in conjunction with a software evaluation checklist. Twenty novice elementary education students, divided into groups of five, and three experts participated. Each novice group and the three experts evaluated educational software…
Descriptors: Observation, Content Analysis, Focus Groups, Case Studies
Peer reviewed
Unal, Zafer; Bodur, Yasar; Unal, Aslihan – Journal of Information Technology Education: Research, 2012
Current literature provides many examples of rubrics used to evaluate the quality of webquest designs. However, the reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to utilize the strengths of the…
Descriptors: Scoring Rubrics, Test Reliability, Test Construction, Evaluation Criteria
Peer reviewed
Incikabi, Lutfi; Sancar Tokmak, Hatice – Educational Media International, 2012
This case study examined the educational software evaluation processes of pre-service teachers who attended either expertise-based training (XBT) or traditional training in conjunction with a Software-Evaluation checklist. Forty-three mathematics teacher candidates and three experts participated in the study. All participants evaluated educational…
Descriptors: Foreign Countries, Novices, Check Lists, Mathematics Education
Peer reviewed
Kay, Robin H.; Knaack, Liesel – Australasian Journal of Educational Technology, 2008
While discussion of the criteria needed to assess learning objects has been extensive, a formal, systematic model for evaluation has yet to be thoroughly tested. The purpose of the following study was to develop and assess a multi-component model for evaluating learning objects. The Learning Object Evaluation Metric (LOEM) was developed from a…
Descriptors: Foreign Countries, Models, Measurement Techniques, Evaluation Criteria