Showing all 14 results
Peer reviewed
Asassfeh, Sahail; Al-Ebous, Hana'; Khwaileh, Faisal; Al-Zoubi, Zohair – Educational Studies, 2014
This study is the first to address student evaluation of faculty members (SFE) from a student perspective at a major Jordanian public university using a comprehensive (71-item) questionnaire administered to 620 undergraduates. Addressed are students' perceptions of the SFE process in terms of: (a) their paper-based vs. online-format preferences;…
Descriptors: Foreign Countries, Student Evaluation of Teacher Performance, Universities, Student Attitudes
Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed. – International Association for the Evaluation of Educational Achievement, 2013
This supplement describes national adaptations made to the international version of the TIMSS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the TIMSS 2011 background variables. Background questionnaire adaptations…
Descriptors: Questionnaires, Technology Transfer, Adoption (Ideas), Media Adaptation
Peer reviewed
Kim, Jungtae; Craig, Daniel A. – Computer Assisted Language Learning, 2012
Videoconferencing offers new opportunities for language testers to assess speaking ability in low-stakes diagnostic tests. To be considered a trusted testing tool in language testing, a test should be examined employing appropriate validation processes [Chapelle, C.A., Jamieson, J., & Hegelheimer, V. (2003). "Validation of a web-based ESL…
Descriptors: Speech Communication, Testing, Language Tests, Construct Validity
Kunkle, Wanda M. – ProQuest LLC, 2010
Many students experience difficulties learning to program. They find learning to program in the object-oriented paradigm particularly challenging. As a result, computing educators have tried a variety of instructional methods to assist beginning programmers. These include developing approaches geared specifically toward novices and experimenting…
Descriptors: Computer Science Education, Programming Languages, Language of Instruction, Academic Achievement
Peer reviewed
Kim, Do-Hong; Huynh, Huynh – Educational and Psychological Measurement, 2008
The current study compared student performance between paper-and-pencil testing (PPT) and computer-based testing (CBT) on a large-scale statewide end-of-course English examination. Analyses were conducted at both the item and test levels. The overall results suggest that scores obtained from PPT and CBT were comparable. However, at the content…
Descriptors: Reading Comprehension, Computer Assisted Testing, Factor Analysis, Comparative Testing
Peer reviewed
Jaehnig, Wendy; Miller, Matthew L. – Psychological Record, 2007
The effectiveness of different types of feedback in programmed instruction was investigated. Knowledge of results had the least data to support its efficacy. Knowledge of correct responding (KCR) has been shown to be effective in several studies. Elaboration feedback is more effective than KCR, but may require more time of the…
Descriptors: Teaching Methods, Instructional Design, Feedback, Instructional Effectiveness
Walt, Nancy; Atwood, Kristin; Mann, Alex – Journal of Technology, Learning, and Assessment, 2008
The purpose of this study was to determine whether or not survey medium (electronic versus paper format) has a significant effect on the results achieved. To compare survey media, responses from elementary students to British Columbia's Satisfaction Survey were analyzed. Although this study was not experimental in design, the data set served as a…
Descriptors: Student Attitudes, Factor Analysis, Foreign Countries, Elementary School Students
Kim, Do-Hong; Huynh, Huynh – Journal of Technology, Learning, and Assessment, 2007
This study examined comparability of student scores obtained from computerized and paper-and-pencil formats of the large-scale statewide end-of-course (EOC) examinations in the two subject areas of Algebra and Biology. Evidence in support of comparability of computerized and paper-based tests was sought by examining scale scores, item parameter…
Descriptors: Computer Assisted Testing, Measures (Individuals), Biology, Algebra
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
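As an aside for readers unfamiliar with the effect size measure named in the entry above, the following minimal Python sketch shows how Cohen's d between two score distributions could be computed with a pooled standard deviation. The score arrays and variable names are hypothetical illustrations, not data or code from the study.

# Minimal illustration of Cohen's d with a pooled standard deviation.
# The two score lists below are hypothetical, not data from the study.
import math

def cohens_d(group_a, group_b):
    """Return Cohen's d for two independent groups of scores."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical scaled scores from a paper-and-pencil and a computer-based form.
ppt_scores = [152, 148, 160, 155, 149, 158, 151, 154]
cbt_scores = [150, 147, 157, 153, 148, 156, 150, 152]
print(round(cohens_d(ppt_scores, cbt_scores), 3))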
Wang, Jinhao; Brown, Michelle Stallone – Journal of Technology, Learning, and Assessment, 2007
The current research was conducted to investigate the validity of automated essay scoring (AES) by comparing group mean scores assigned by an AES tool, IntelliMetric [TM] and human raters. Data collection included administering the Texas version of the WriterPlacer "Plus" test and obtaining scores assigned by IntelliMetric [TM] and by…
Descriptors: Test Scoring Machines, Scoring, Comparative Testing, Intermode Differences
Peer reviewed
Chamberlin, Michelle T.; Powers, Robert A. – Issues in the Undergraduate Mathematics Preparation of School Teachers, 2007
Faced with selecting a geometry curriculum for our preservice elementary teacher mathematics course, we used a mixed-methods study to investigate the effectiveness, with respect to student achievement and student perception, of three reform-oriented curricula. ANCOVA results indicate students using one of the curricula scored significantly higher…
Descriptors: Textbook Selection, Geometry, Elementary School Mathematics, Elementary School Teachers
Peer reviewed
Lai, Ah-Fur; Chen, Deng-Jyi; Chen, Shu-Ling – Journal of Educational Multimedia and Hypermedia, 2008
Item response theory (IRT) has been studied and applied in computer-based testing for decades. However, almost all of these existing studies focus merely on test questions presented exclusively in a text-based (or static text/graphic) form. In this paper, we present our study on test questions using both…
Descriptors: Elementary School Students, Semantics, Difficulty Level, Item Response Theory
Peer reviewed
Randolph, Justus J.; Virnes, Marjo; Jormanainen, Ilkka; Eronen, Pasi J. – Educational Technology & Society, 2006
Although computer-assisted interview tools have much potential, little empirical evidence on the quality and quantity of data generated by these tools has been collected. In this study we compared the effects of using Virre, a computer-assisted self-interview tool, with the effects of using other data collection methods, such as written responding…
Descriptors: Computer Science Education, Effect Size, Data Collection, Computer Assisted Testing
Peer reviewed
Schuldberg, David – Computers in Human Behavior, 1988
Describes study that investigated the effects of computerized test administration on undergraduates' responses to the Minnesota Multiphasic Personality Inventory (MMPI), and discusses methodological considerations important in evaluating the sensitivity of personality inventories in different administration formats. Results analyze the effects of…
Descriptors: Analysis of Variance, Comparative Testing, Computer Assisted Testing, Higher Education