Showing 1 to 15 of 33 results
Sebastian Moncaleano – ProQuest LLC, 2021
The growth of computer-based testing over the last two decades has motivated the creation of innovative item formats. It is often argued that technology-enhanced items (TEIs) provide better measurement of test-takers' knowledge, skills, and abilities by increasing the authenticity of tasks presented to test-takers (Sireci & Zenisky, 2006).…
Descriptors: Computer Assisted Testing, Test Format, Test Items, Classification
Peer reviewed
Direct link
Wang, Yu; Chiu, Chia-Yi; Köhn, Hans Friedrich – Journal of Educational and Behavioral Statistics, 2023
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Nonparametric Statistics, Test Format, Educational Assessment
Peer reviewed
PDF on ERIC: Download full text
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often improved overall test results. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed
PDF on ERIC: Download full text
Delican, Burak – International Journal of Curriculum and Instruction, 2022
In this research, the questions in the Turkish Course (2, 3, 4) Worksheets were examined in terms of various classification systems. To this end, the questions in the worksheets were evaluated using the document-material analysis technique, in keeping with a qualitative research design. During the research process, Turkish Course…
Descriptors: Worksheets, Elementary School Students, Turkish, Classification
Peer reviewed
PDF on ERIC: Download full text
Malec, Wojciech; Krzeminska-Adamek, Malgorzata – Practical Assessment, Research & Evaluation, 2020
The main objective of the article is to compare several methods of evaluating multiple-choice options through classical item analysis. The methods subjected to examination include the tabulation of choice distribution, the interpretation of trace lines, the point-biserial correlation, the categorical analysis of trace lines, and the investigation…
Descriptors: Comparative Analysis, Evaluation Methods, Multiple Choice Tests, Item Analysis
Peer reviewed
Direct link
Açikgül Firat, Esra; Köksal, Mustafa S. – Biochemistry and Molecular Biology Education, 2019
In this study, a 'biotechnology literacy test' was developed to determine the biotechnology literacy of prospective science teachers, and its validity and reliability were determined. For this purpose, 42 items were prepared by considering Bybee's scientific literacy classifications (nominal, functional, procedural, and multidimensional). The…
Descriptors: Test Construction, Multiple Choice Tests, Science Teachers, Preservice Teachers
Peer reviewed
PDF on ERIC: Download full text
Smith, J. Alexander; Dickinson, John R. – International Journal for Business Education, 2017
Published banks of multiple-choice questions are ubiquitous, and the questions in those banks are often classified into levels of difficulty. The specific level of difficulty into which a question is classified might, or arguably should, be a function of the question's substance. Possibly, though, insubstantive aspects of the question, such as the incidence of…
Descriptors: Correlation, Multiple Choice Tests, Difficulty Level, Classification
Peer reviewed
PDF on ERIC: Download full text
Yusof, Safiah Md; Lim, Tick Meng; Png, Leo; Khatab, Zainuriyah Abd; Singh, Harvinder Kaur Dharam – Journal of Learning for Development, 2017
Open University Malaysia (OUM) is progressively moving towards implementing on-demand and online assessment. This move is deemed necessary for OUM to continue to be the leading provider of flexible learning. OUM serves a very large number of students each semester, and these students are widely distributed throughout the country. As the…
Descriptors: Foreign Countries, Computer Assisted Testing, Computer Managed Instruction, Management Systems
Peer reviewed
PDF on ERIC: Download full text
Rakes, Christopher R.; Ronau, Robert N. – International Journal of Research in Education and Science, 2019
The present study examined the ability of content domain (algebra, geometry, rational number, probability) to classify mathematics misconceptions. The study was conducted with 1,133 students in 53 algebra and geometry classes taught by 17 teachers from three high schools and one middle school across three school districts in a Midwestern state.…
Descriptors: Mathematics Instruction, Secondary School Teachers, Middle School Teachers, Misconceptions
Peer reviewed
Direct link
Young, Arthur; Shawl, Stephen J. – Astronomy Education Review, 2013
Professors who teach introductory astronomy to students not majoring in science want them to comprehend the concepts and theories that form the basis of the science. They are usually less concerned about the myriad detailed facts and pieces of information that accompany the science. As such, professors prefer to test the students for such…
Descriptors: Multiple Choice Tests, Classification, Astronomy, Introductory Courses
Peer reviewed
PDF on ERIC: Download full text
Solnyshkina, Marina I.; Harkova, Elena V.; Kiselnikov, Aleksander S. – English Language Teaching, 2014
The article summarizes a study of reading comprehension tasks used in preparation for the Unified (Russian) State Exam. The corpus of reading tasks was analyzed using the classification algorithm developed by Weir and Urquhart (1998), aimed at determining the level of engagement (local or global) and the type of engagement (literal or…
Descriptors: Reading Comprehension, Reading Tests, Multiple Choice Tests, Test Items
Peer reviewed
Direct link
Luebke, Stephen; Lorie, James – Journal of Applied Testing Technology, 2013
This article is a brief account of the use of Bloom's Taxonomy of Educational Objectives (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) by staff of the Law School Admission Council in the 1990 development of redesigned specifications for the Reading Comprehension section of the Law School Admission Test. Summary item statistics for the…
Descriptors: Classification, Educational Objectives, Reading Comprehension, Law Schools
Peer reviewed
Direct link
Lee, Jihyun; Corter, James E. – Applied Psychological Measurement, 2011
Diagnosis of misconceptions or "bugs" in procedural skills is difficult because of their unstable nature. This study addresses this problem by proposing and evaluating a probability-based approach to the diagnosis of bugs in children's multicolumn subtraction performance using Bayesian networks. This approach assumes a causal network relating…
Descriptors: Misconceptions, Probability, Children, Subtraction
Peer reviewed
Direct link
Revuelta, Javier – Psychometrika, 2010
A comprehensive analysis of difficulty for multiple-choice items requires information at different levels: the test, the items, and the alternatives. This paper introduces a new parameterization of the nominal categories model (NCM) for analyzing difficulty at these three levels. The new parameterization is referred to as the NE-NCM and is…
Descriptors: Classification, Short Term Memory, Multiple Choice Tests, Test Items
Peer reviewed
Direct link
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability