Showing 1 to 15 of 19 results
Crystal Uminski – ProQuest LLC, 2023
The landscape of undergraduate biology education has been shaped by decades of reform efforts calling for instruction to integrate core concepts and scientific skills as a means of helping students become proficient in the discipline. Assessments can be used to make inferences about how these reform efforts have translated into changes in…
Descriptors: Undergraduate Students, Biology, Science Instruction, Science Tests
Peer reviewed
David Bell; Vikki O'Neill; Vivienne Crawford – Practitioner Research in Higher Education, 2023
We compared the influence of open-book, extended-duration versus closed-book, time-limited formats on the reliability and validity of written assessments of pharmacology learning outcomes within our medical and dental courses. Our dental cohort undertakes a mid-year test (30 free-response short answers to a question, SAQ) and an end-of-year paper (4 SAQ,…
Descriptors: Undergraduate Students, Pharmacology, Pharmaceutical Education, Test Format
Peer reviewed
Yangqiuting Li; Chandralekha Singh – Physical Review Physics Education Research, 2025
Research-based multiple-choice questions implemented in class with peer instruction have been shown to be an effective tool for improving students' engagement and learning outcomes. Moreover, multiple-choice questions that are carefully sequenced to build on each other can be particularly helpful for students to develop a systematic understanding…
Descriptors: Physics, Science Instruction, Science Tests, Multiple Choice Tests
Peer reviewed
Wicaksono, Azizul Ghofar Candra; Korom, Erzsébet – Participatory Educational Research, 2022
The accuracy of learning results relies on evaluation and assessment. Learning goals, including problem-solving ability, must be aligned with valid, standardized measurement tools. This study, exploring the nature of problem solving, its framework, and its assessment in the Indonesian context, will contribute to problem solving…
Descriptors: Problem Solving, Educational Research, Test Construction, Test Validity
Peer reviewed
Walsh, Cole; Quinn, Katherine N.; Wieman, C.; Holmes, N. G. – Physical Review Physics Education Research, 2019
Introductory physics lab instruction is undergoing a transformation, with increasing emphasis on developing experimentation and critical thinking skills. These changes present a need for standardized assessment instruments to determine the degree to which students develop these skills through instructional labs. In this article, we present the…
Descriptors: Critical Thinking, Physics, Cognitive Tests, Science Experiments
Peer reviewed
Karakolidis, Anastasios; O'Leary, Michael; Scully, Darina – International Journal of Testing, 2021
The linguistic complexity of many text-based tests can be a source of construct-irrelevant variance, as test-takers' performance may be affected by factors that are beyond the focus of the assessment itself, such as reading comprehension skills. This experimental study examined the extent to which the use of animated videos, as opposed to written…
Descriptors: Animation, Vignettes, Video Technology, Test Format
Peer reviewed
Smolinsky, Lawrence; Marx, Brian D.; Olafsson, Gestur; Ma, Yanxia A. – Journal of Educational Computing Research, 2020
Computer-based testing is an expanding use of technology that offers advantages to teachers and students. We studied Calculus II classes for science, technology, engineering, and mathematics majors using different testing modes. Three sections totaling 324 students employed paper-and-pencil testing, computer-based testing, or both. Computer tests gave…
Descriptors: Test Format, Computer Assisted Testing, Paper (Material), Calculus
Wielicki, Tom – International Association for Development of the Information Society, 2016
This paper reports on a longitudinal study of the integrity of testing in the online format used by e-learning platforms. Specifically, this study examines whether online testing, which implies an open-book format, compromises the integrity of assessment by encouraging cheating among students. A statistical experiment designed for this study…
Descriptors: Integrity, Online Courses, Statistical Surveys, Longitudinal Studies
Peer reviewed
Zhang, Xijuan; Savalei, Victoria – Educational and Psychological Measurement, 2016
Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format,…
Descriptors: Factor Structure, Psychological Testing, Alternative Assessment, Test Items
Peer reviewed
Read, John; von Randow, Janet – International Journal of English Studies, 2013
The increasingly diverse language backgrounds of their students are creating new challenges for English-medium universities. One response in Australian and New Zealand institutions has been to introduce post-entry language assessment (PELA) to identify incoming students who need to enhance their academic language ability. One successful example of…
Descriptors: Foreign Countries, Language Tests, Academic Discourse, Diagnostic Tests
Peer reviewed
Darby, Jenny A. – Journal of European Industrial Training, 2007
Purpose--The purpose of this research is to examine participants' response rates on dual-style training course evaluation forms, which combine structured and open-ended formats. Pencil-and-paper forms have a long history of use by trainers in business and commerce, and more recently in education. Research methods texts tend to have neglected the…
Descriptors: Research Methodology, Course Evaluation, Response Rates (Questionnaires), Test Validity
Peer reviewed
Osterlind, Steven J.; Miao, Danmin; Sheng, Yanyan; Chia, Rosina C. – International Journal of Testing, 2004
This study investigated the interaction between different cultural groups and item type, and the ensuing effect on construct validity for a psychological inventory, the Myers-Briggs Type Indicator (MBTI, Form G). The authors analyzed 94 items from 2 Chinese-translated versions of the MBTI (Form G) for factorial differences among groups of…
Descriptors: Test Format, Undergraduate Students, Cultural Differences, Test Validity
Peer reviewed
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine comparability of an online version to the original paper-pencil version of Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
Peer reviewed
Schriesheim, Chester A.; And Others – Educational and Psychological Measurement, 1991
Effects of item wording on questionnaire reliability and validity were studied, using 280 undergraduate business students who completed a questionnaire comprising 4 item types: (1) regular; (2) polar opposite; (3) negated polar opposite; and (4) negated regular. Implications of results favoring regular and negated regular items are discussed. (SLD)
Descriptors: Business Education, Comparative Testing, Higher Education, Negative Forms (Language)
Peer reviewed
Caudill, Steven B.; Gropper, Daniel M. – Journal of Economic Education, 1991
Presents a study of the effect of question order on student performance on economics tests. Reports that question order has no statistically significant effect on examination scores, even after including variables that reflect differential human capital characteristics. Concludes that instructors need not worry that some examination versions give…
Descriptors: Economics Education, Educational Research, Higher Education, Human Capital