Showing 1 to 15 of 33 results
Peer reviewed
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple choice (MC) and mixed format tests within the common item nonequivalent group design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
Peer reviewed
Lemmo, Alice – International Journal of Science and Mathematics Education, 2021
Comparative studies on paper-and-pencil and computer-based tests principally focus on statistical analysis of students' performances. In educational assessment, comparing students' performance (in terms of right or wrong results) does not imply a comparison of problem-solving processes followed by students. In this paper, we present a theoretical…
Descriptors: Computer Assisted Testing, Comparative Analysis, Evaluation Methods, Student Evaluation
Peer reviewed
Ford, Jeremy W.; Conoyer, Sarah J.; Lembke, Erica S.; Smith, R. Alex; Hosp, John L. – Assessment for Effective Intervention, 2018
In the present study, two types of curriculum-based measurement (CBM) tools in science, Vocabulary Matching (VM) and Statement Verification for Science (SV-S), a modified Sentence Verification Technique, were compared. Specifically, this study aimed to determine whether the format of information presented (i.e., SV-S vs. VM) produces differences…
Descriptors: Curriculum Based Assessment, Evaluation Methods, Measurement Techniques, Comparative Analysis
Peer reviewed
Shohamy, Elana; Tannenbaum, Michal; Gani, Anna – International Journal of Bilingual Education and Bilingualism, 2022
Notwithstanding the introduction of multilingual education policies worldwide, testing and assessment procedures still rely almost exclusively on the monolingual construct. This paper describes a study, part of a larger project fostering a new multilingual education policy in Israeli schools, exploring bi/multilingual assessment. It included two…
Descriptors: Scores, Comparative Analysis, Hebrew, Arabic
Peer reviewed
Goertler, Senta; Gacs, Adam – Unterrichtspraxis/Teaching German, 2018
As online educational programs and courses increase (Allen & Seaman, 2014), it is important to understand the benefits and limitations of this delivery format when assessing students and when comparing learning outcomes. This article addresses the following two questions: (1) What are some of the best practices in assessing…
Descriptors: Online Courses, Second Language Instruction, Second Language Learning, German
Wolf, Raffaela – ProQuest LLC, 2013
Preservation of equity properties was examined using four equating methods--IRT True Score, IRT Observed Score, Frequency Estimation, and Chained Equipercentile--in a mixed-format test under a common-item nonequivalent groups (CINEG) design. Equating of mixed-format tests under a CINEG design can be influenced by factors such as attributes of the…
Descriptors: Testing, Item Response Theory, Equated Scores, Test Items
Peer reviewed
Kirschner, Sophie; Borowski, Andreas; Fischer, Hans E.; Gess-Newsome, Julie; von Aufschnaiter, Claudia – International Journal of Science Education, 2016
Teachers' professional knowledge is assumed to be a key variable for effective teaching. As teacher education aims to enhance the professional knowledge of current and future teachers, this knowledge should be described and assessed. Nevertheless, only a limited number of studies quantitatively measure physics teachers' professional…
Descriptors: Evaluation Methods, Tests, Test Format, Science Instruction
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Peer reviewed
Evers, Arne – International Journal of Testing, 2012
In this article, the characteristics of five test review models are described. The five models are the US review system at the Buros Center for Testing, the German Test Review System of the Committee on Tests, the Brazilian System for the Evaluation of Psychological Tests, the European EFPA Review Model, and the Dutch COTAN Evaluation System for…
Descriptors: Program Evaluation, Test Reviews, Trend Analysis, International Education
Alemi, Minoo; Miraghaee, Apama – Journal on English Language Teaching, 2011
The present study was carried out to find out whether regular administration of cloze tests improved students' knowledge of grammar more than multiple-choice tests did. Subjects participating in this study were 84 Iranian pre-university students of Allameh-Gotb-e Ravandi University, aged between 18 and 35 and enrolled in a grammar course. To…
Descriptors: Foreign Countries, Comparative Analysis, Grammar, Knowledge Level
Peer reviewed
Lafontaine, Dominique; Monseur, Christian – European Educational Research Journal, 2009
In this article we discuss how indicators that may appear straightforward, such as gender differences, need to be interpreted with extreme care. In particular, we consider how the assessment framework and the methodology of international surveys may affect the results and the indicators. Through analysis of…
Descriptors: Foreign Countries, Reading Comprehension, Test Format, Comparative Analysis
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method to examination based on constructed-response questions (CRQs). Although MCQs offer objectivity in the grading process and speed in producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
Peer reviewed
Klein, Esther Dominique; van Ackeren, Isabell – Studies in Educational Evaluation, 2011
Statewide exit examinations play an important role in discussions on school effectiveness. Referring to educational governance concepts, this paper presumes a relation between the varying organizational structures of statewide examinations across states and heterogeneous effects on school actors. It is assumed that their ability to affect work in…
Descriptors: Exit Examinations, Governance, School Effectiveness, Foreign Countries
Peer reviewed
PDF on ERIC
Fike, David S.; Doyle, Denise J.; Connelly, Robert J. – Journal of Effective Teaching, 2010
Evaluation of teaching effectiveness is considered a critical element in determining whether or not faculty members are retained at higher education institutions; academic milestones such as tenure and promotion often require documentation of the quality of faculty teaching. As methods of assessing teaching effectiveness evolve, concerns about the…
Descriptors: Online Surveys, Test Format, Delivery Systems, Student Evaluation of Teacher Performance
Peer reviewed
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that perhaps this might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading