Publication Date
- In 2025: 0
- Since 2024: 0
- Since 2021 (last 5 years): 0
- Since 2016 (last 10 years): 19
- Since 2006 (last 20 years): 41
Descriptor
- Difficulty Level: 79
- Item Analysis: 79
- Statistical Analysis: 79
- Test Items: 60
- Test Construction: 27
- Comparative Analysis: 19
- Foreign Countries: 19
- Multiple Choice Tests: 17
- Test Reliability: 16
- Test Validity: 16
- Achievement Tests: 13
Author
- Benson, Jeri: 2
- Bratfisch, Oswald: 2
- Kostin, Irene: 2
- Livingston, Samuel A.: 2
- Adeleke, A. A.: 1
- Agnello, Paul: 1
- Alpayar, Cagla: 1
- Alsma, Jelmer: 1
- Arhin, Ato Kwamina: 1
- Ariel, Robert: 1
- Attali, Yigal: 1
Education Level
- Higher Education: 17
- Postsecondary Education: 12
- Secondary Education: 7
- Elementary Education: 4
- Middle Schools: 3
- Elementary Secondary Education: 2
- Grade 5: 2
- Grade 8: 2
- Grade 10: 1
- Grade 3: 1
- Grade 4: 1
Audience
- Researchers: 1
Location
- India: 3
- Australia: 2
- Japan: 2
- Nigeria: 2
- Turkey: 2
- France: 1
- Germany: 1
- Ghana: 1
- Minnesota: 1
- Mississippi: 1
- Netherlands: 1
Assessments and Surveys
- Graduate Record Examinations: 3
- Test of English as a Foreign…: 3
- Law School Admission Test: 1
- Program for International…: 1
- Stanford Binet Intelligence…: 1
- Trends in International…: 1
- United States Medical…: 1
Smith, Tamarah; Smith, Samantha – International Journal of Teaching and Learning in Higher Education, 2018
The Research Methods Skills Assessment (RMSA) was created to measure psychology majors' statistics knowledge and skills. The American Psychological Association's Guidelines for the Undergraduate Major in Psychology (APA, 2007, 2013) served as a framework for development. Results from a Rasch analysis with data from n = 330 undergraduates showed…
Descriptors: Psychology, Statistics, Undergraduate Students, Item Response Theory

Le Hebel, Florence; Montpied, Pascale; Tiberghien, Andrée; Fontanieu, Valérie – International Journal of Science Education, 2017
The understanding of what makes a question difficult is a crucial concern in assessment. To study the difficulty of test questions, we focus on the case of PISA, which assesses to what degree 15-year-old students have acquired knowledge and skills essential for full participation in society. Our research question is to identify PISA science item…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students

Young, Nicholas T.; Heckler, Andrew F. – Physical Review Physics Education Research, 2018
In the context of a generic harmonic oscillator, we investigated students' accuracy in determining the period, frequency, and angular frequency from mathematical and graphical representations. In a series of studies including interviews, free response tests, and multiple-choice tests developed in an iterative process, we assessed students in both…
Descriptors: Interviews, Accuracy, Multiple Choice Tests, Algebra

Agnello, Paul – ProQuest LLC, 2018
Pseudowords (words that are not real but resemble real words in a language) have been used increasingly as a technique to reduce contamination due to construct-irrelevant variance in assessments of verbal fluid reasoning (Gf). However, despite pseudowords being researched heavily in other psychology sub-disciplines, they have received little…
Descriptors: Scores, Intelligence Tests, Difficulty Level, Item Analysis

Ozturk, Nagihan Boztunc; Dogan, Nuri – Educational Sciences: Theory and Practice, 2015
This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…
Descriptors: Computer Assisted Testing, Item Analysis, Statistical Analysis, Comparative Analysis

Pawade, Yogesh R.; Diwase, Dipti S. – Journal of Educational Technology, 2016
Item analysis of Multiple Choice Questions (MCQs) is the process of collecting, summarizing and utilizing information from students' responses to evaluate the quality of test items. Difficulty Index (p-value), Discrimination Index (DI) and Distractor Efficiency (DE) are the parameters which help to evaluate the quality of MCQs used in an…
Descriptors: Test Items, Item Analysis, Multiple Choice Tests, Curriculum Development
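The three statistics named in the abstract above follow standard classical test theory conventions. As a minimal sketch (function names and the 27% upper/lower grouping and 5% functional-distractor threshold are common conventions, not taken from the article itself):

```python
def difficulty_index(num_correct, num_examinees):
    """Difficulty Index (p-value): proportion of examinees answering correctly."""
    return num_correct / num_examinees

def discrimination_index(upper_correct, lower_correct, group_size):
    """Discrimination Index (DI): difference in correct-response rates between
    the top- and bottom-scoring groups (often the upper and lower 27%)."""
    return (upper_correct - lower_correct) / group_size

def distractor_efficiency(distractor_counts, num_examinees, threshold=0.05):
    """Distractor Efficiency (DE): share of distractors that are 'functional',
    i.e. chosen by at least ~5% of examinees under one common convention."""
    functional = sum(1 for c in distractor_counts if c / num_examinees >= threshold)
    return functional / len(distractor_counts)

# Illustrative item: 100 examinees, 60 correct; upper/lower groups of 27 each
# answered correctly 22 and 9 times; the 3 distractors drew 15, 18, and 7 picks.
p = difficulty_index(60, 100)                  # 0.60 -> moderate difficulty
di = discrimination_index(22, 9, 27)           # ~0.48 -> good discrimination
de = distractor_efficiency([15, 18, 7], 100)   # 1.0 -> all distractors functional
print(p, round(di, 2), de)
```

Cut-off values for "acceptable" p, DI, and DE vary across the studies listed here; the functions only compute the raw indices.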
Quaigrain, Kennedy; Arhin, Ato Kwamina – Cogent Education, 2017
Item analysis is essential in improving items which will be used again in later tests; it can also be used to eliminate misleading items in a test. The study focused on item and test quality and explored the relationship between difficulty index (p-value) and discrimination index (DI) with distractor efficiency (DE). The study was conducted among…
Descriptors: Item Analysis, Teacher Developed Materials, Test Reliability, Educational Assessment

Facon, Bruno; Magis, David – Journal of Speech, Language, and Hearing Research, 2016
Purpose: An item analysis of Bishop's (1983) Test for Reception of Grammar (TROG) in its French version (F-TROG; Lecocq, 1996) was conducted to determine whether the difficulty of items is similar for participants with or without intellectual disability (ID). Method: In Study 1, responses to the 92 F-TROG items by 55 participants with Down…
Descriptors: Item Analysis, Grammar, Children, Adolescents

Sener, Nilay; Tas, Erol – Journal of Education and Learning, 2017
The purpose of this study is to prepare a multiple-choice achievement test with high reliability and validity for the "Let's Solve the Puzzle of Our Body" unit. For this purpose, a multiple choice achievement test consisting of 46 items was applied to 178 fifth grade students in total. As a result of the test and material analysis…
Descriptors: Achievement Tests, Grade 5, Science Instruction, Biology

Marie, S. Maria Josephine Arokia; Edannur, Sreekala – Journal of Educational Technology, 2015
This paper focused on the analysis of test items constructed in the paper of teaching Physical Science for B.Ed. class. It involved the analysis of difficulty level and discrimination power of each test item. Item analysis allows selecting or omitting items from the test, but more importantly item analysis is a tool to help the item writer improve…
Descriptors: Item Analysis, Relevance (Education), Standardized Tests, Achievement Tests

Stefanski, Katherine M.; Gardner, Grant E.; Seipelt-Thiemann, Rebecca L. – CBE - Life Sciences Education, 2016
Concept inventories (CIs) are valuable tools for educators that assess student achievement and identify misconceptions held by students. Results of student responses can be used to adjust or develop new instructional methods for a given topic. The regulation of gene expression in both prokaryotes and eukaryotes is an important concept in genetics…
Descriptors: Concept Formation, Misconceptions, Teaching Methods, Undergraduate Students

Alpayar, Cagla; Gulleroglu, H. Deniz – Educational Research and Reviews, 2017
The aim of this research is to determine whether students' test performance and approaches to test questions change based on the type of mathematics questions (visual or verbal) administered to them. This research is based on a mixed-design model. The quantitative data are gathered from 297 seventh grade students, attending seven different middle…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Student Evaluation

Zhu, Mengxiao; Lee, Hee-Sun; Wang, Ting; Liu, Ou Lydia; Belur, Vinetha; Pallant, Amy – International Journal of Science Education, 2017
This study investigates the role of automated scoring and feedback in supporting students' construction of written scientific arguments while learning about factors that affect climate change in the classroom. The automated scoring and feedback technology was integrated into an online module. Students' written scientific argumentation occurred…
Descriptors: Science Instruction, Climate, Change, Persuasive Discourse

Chang, Mei-Lin; Engelhard, George, Jr. – Journal of Psychoeducational Assessment, 2016
The purpose of this study is to examine the psychometric quality of the Teachers' Sense of Efficacy Scale (TSES) with data collected from 554 teachers in a U.S. Midwestern state. The many-facet Rasch model was used to examine several potential contextual influences (years of teaching experience, school context, and levels of emotional exhaustion)…
Descriptors: Models, Teacher Attitudes, Self Efficacy, Item Response Theory

Trace, Jonathan; Brown, James Dean; Janssen, Gerriet; Kozhevnikova, Liudmila – Language Testing, 2017
Cloze tests have been the subject of numerous studies regarding their function and use in both first language and second language contexts (e.g., Jonz & Oller, 1994; Watanabe & Koyama, 2008). From a validity standpoint, one area of investigation has been the extent to which cloze tests measure reading ability beyond the sentence level.…
Descriptors: Cloze Procedure, Language Tests, Test Items, Item Analysis