Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 222 |
| Since 2022 (last 5 years) | 1091 |
| Since 2017 (last 10 years) | 2601 |
| Since 2007 (last 20 years) | 4962 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 227 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does Not Meet Standards | 1 |
Beck, Christina; Nerdel, Claudia – Contributions from Science Education Research, 2019
Dealing with multiple external representations (MERs) in science education is key to students' understanding of science communication and to their becoming scientifically literate. It is generally accepted that learning scientific concepts, processes, and principles requires understanding and interacting with MERs. Science can be understood as a…
Descriptors: Biology, Science Instruction, Models, Visual Aids
Lorenceau, Adrien; Marec, Camille; Mostafa, Tarek – OECD Publishing, 2019
This paper explains the rationale for updating the OECD Programme for International Student Assessment (PISA) 2021 questionnaire on Information and Communication Technology (ICT) and shows how it covers policy topics of current relevance. After presenting key findings based on previous ICT-related PISA data, the paper provides a summary of the…
Descriptors: Questionnaires, Information Technology, Achievement Tests, Secondary School Students
Sarah Lindstrom Johnson; Ray E. Reichenberg; Kathan Shukla; Tracy E. Waasdorp; Catherine P. Bradshaw – Grantee Submission, 2019
The United States government has become increasingly focused on school climate, as recently evidenced by its inclusion as an accountability indicator in the "Every Student Succeeds Act". Yet, there remains considerable variability in both conceptualizing and measuring school climate. To better inform the research and practice related to…
Descriptors: Item Response Theory, Educational Environment, Accountability, Educational Legislation
Pishghadam, Reza; Baghaei, Purya; Seyednozadi, Zahra – International Journal of Testing, 2017
This article attempts to present emotioncy as a potential source of test bias to inform the analysis of test item performance. Emotioncy is defined as a hierarchy, ranging from "exvolvement" (auditory, visual, and kinesthetic) to "involvement" (inner and arch), to emphasize the emotions evoked by the senses. This study…
Descriptors: Test Bias, Item Response Theory, Test Items, Psychological Patterns
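Screening items for the kind of bias studied here typically runs through differential item functioning (DIF) analysis. As a minimal sketch, not the authors' procedure, the Mantel-Haenszel common odds ratio for a single item, stratified by a matching score, can be computed as follows (all names and the synthetic data are illustrative):

```python
import numpy as np

def mantel_haenszel_odds_ratio(item_correct, group, total_score):
    """MH common odds ratio for one item across score strata.

    item_correct : 0/1 array of responses to the studied item
    group        : 0 = reference group, 1 = focal group
    total_score  : matching variable (e.g., rest score on the test)
    Values far from 1 suggest uniform DIF against one group.
    """
    num = den = 0.0
    for k in np.unique(total_score):
        s = total_score == k
        a = np.sum((group[s] == 0) & (item_correct[s] == 1))  # reference, correct
        b = np.sum((group[s] == 0) & (item_correct[s] == 0))  # reference, incorrect
        c = np.sum((group[s] == 1) & (item_correct[s] == 1))  # focal, correct
        d = np.sum((group[s] == 1) & (item_correct[s] == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else float("nan")

# Synthetic demo with no DIF built in: the ratio should sit near 1.
rng = np.random.default_rng(0)
theta = rng.normal(0, 1, 2000)
group = rng.integers(0, 2, 2000)
p = 1 / (1 + np.exp(-(theta - 0.2)))
item = (rng.random(2000) < p).astype(int)
total = rng.binomial(20, 1 / (1 + np.exp(-theta)))  # crude matching score
print(mantel_haenszel_odds_ratio(item, group, total))
```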
Loukina, Anastassia; Zechner, Klaus; Yoon, Su-Youn; Zhang, Mo; Tao, Jidong; Wang, Xinhao; Lee, Chong Min; Mulholland, Matthew – ETS Research Report Series, 2017
This report presents an overview of the "SpeechRater"® automated scoring engine model building and evaluation process for several item types with a focus on a low-English-proficiency test-taker population. We discuss each stage of speech scoring, including automatic speech recognition, filtering models for nonscorable responses, and…
Descriptors: Automation, Scoring, Speech Tests, Test Items
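The staged structure described in the abstract (recognition, a filtering model for nonscorable responses, then scoring) can be pictured as a simple pipeline. The sketch below shows only that control flow; the feature names, thresholds, and toy linear score are hypothetical placeholders, not the SpeechRater engine's actual models:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpokenResponse:
    audio_seconds: float      # hypothetical audio/ASR features
    asr_confidence: float
    words_per_minute: float

def is_scorable(r: SpokenResponse) -> bool:
    """Filtering stage: screen out responses the scoring model should
    not see (silence, unintelligible audio). Thresholds are placeholders."""
    return r.audio_seconds >= 5.0 and r.asr_confidence >= 0.4

def automated_score(r: SpokenResponse) -> float:
    """Scoring stage: in the report this is a statistical model over many
    delivery and language features; here, a toy linear stand-in."""
    return 1.0 + 0.02 * r.words_per_minute + 2.0 * r.asr_confidence

def rate(r: SpokenResponse) -> Optional[float]:
    # Nonscorable responses are routed out (e.g., to human raters).
    return automated_score(r) if is_scorable(r) else None

print(rate(SpokenResponse(30.0, 0.85, 120.0)))  # scorable -> a score
print(rate(SpokenResponse(2.0, 0.10, 0.0)))     # filtered -> None
```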
Hohensinn, Christine; Baghaei, Purya – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
In large-scale multiple-choice (MC) tests, alternate forms of a test may be developed to prevent cheating by changing the order of items or the position of the response options. The assumption is that since the content of the test forms is the same, the order of items or the positions of the response options do not have any effect on…
Descriptors: Multiple Choice Tests, Test Format, Test Items, Difficulty Level
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
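One common way to operationalize "very short response times" is a per-item threshold, for example a fixed fraction of the item's mean response time, below which a response is flagged as a rapid guess. A minimal sketch on synthetic data (the 10% rule here is one convention from the response-time literature, not necessarily the article's):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic respondents-by-items matrix of response times in seconds.
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(500, 40))

thresholds = 0.10 * rt.mean(axis=0)   # per-item normative threshold
rapid = rt < thresholds               # flagged rapid-guess responses

# Response-time effort (RTE): share of a test taker's responses that
# look effortful; low values suggest disengaged test taking.
rte = 1.0 - rapid.mean(axis=1)
print(f"flagged {rapid.mean():.1%} of responses; min RTE = {rte.min():.2f}")
```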
Carney, Michele B.; Cavey, Laurie; Hughes, Gwyneth – Elementary School Journal, 2017
This article illustrates an argument-based approach to presenting validity evidence for assessment items intended to measure a complex construct. Our focus is developing a measure of teachers' ability to analyze and respond to students' mathematical thinking for the purpose of program evaluation. Our validity argument consists of claims addressing…
Descriptors: Mathematics Instruction, Mathematical Logic, Thinking Skills, Evidence
Foss, Donald J.; Pirozzolo, Joseph W. – Journal of Educational Psychology, 2017
We carried out 4 semester-long studies of student performance in a college research methods course (total N = 588). Two sections of the course were taught each semester, with systematic and controlled differences between them. Key manipulations were repeated (with some variation) across the 4 terms, allowing assessment of the replicability of effects.…
Descriptors: Undergraduate Students, Student Evaluation, Testing, Incidence
Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen – Advances in Health Sciences Education, 2017
The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…
Descriptors: Interviews, Scores, Generalizability Theory, Monte Carlo Methods
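In a generalizability-theory framing, the variance "hidden" in the rating items becomes visible once variance components are estimated. Below is a minimal one-facet G-study sketch for a fully crossed candidates-by-items design, using the standard expected-mean-square identities; the design and data are synthetic, not the authors' MMI setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_i = 200, 8                                  # candidates, rating items
X = (rng.normal(0, 1.0, (n_p, 1))                  # candidate effect
     + rng.normal(0, 0.5, (1, n_i))                # item effect
     + rng.normal(0, 0.8, (n_p, n_i)))             # residual

grand = X.mean()
ms_p = n_i * np.sum((X.mean(axis=1) - grand) ** 2) / (n_p - 1)
ms_i = n_p * np.sum((X.mean(axis=0) - grand) ** 2) / (n_i - 1)
ss_res = np.sum((X - X.mean(axis=1, keepdims=True)
                   - X.mean(axis=0, keepdims=True) + grand) ** 2)
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

# Expected mean squares: E[MS_p] = s2_res + n_i*s2_p, E[MS_i] = s2_res + n_p*s2_i.
var_res = ms_res
var_p = (ms_p - ms_res) / n_i
var_i = (ms_i - ms_res) / n_p
print(f"sigma2_p={var_p:.2f}  sigma2_i={var_i:.2f}  sigma2_res={var_res:.2f}")
```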
Sahin, Alper; Anil, Duygu – Educational Sciences: Theory and Practice, 2017
This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprising 50 items was administered to 6,288 students. Data from this test were used to obtain data sets of…
Descriptors: Test Length, Sample Size, Item Response Theory, Test Construction
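The data-generation step behind this kind of study reduces to simulating dichotomous responses from an IRT model while varying the numbers of persons and items. A minimal sketch under a 2PL model (the model choice and parameter ranges are illustrative; the study itself drew its data sets from a real 50-item administration):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_2pl(n_persons, n_items, rng):
    """Dichotomous responses under a two-parameter logistic IRT model."""
    theta = rng.normal(0, 1, n_persons)                # abilities
    a = rng.lognormal(0, 0.3, n_items)                 # discriminations
    b = rng.normal(0, 1, n_items)                      # difficulties
    p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))    # P(correct)
    return (rng.random((n_persons, n_items)) < p).astype(int)

# Crossed sample-size x test-length conditions, as such studies manipulate:
for n in (150, 500, 1000):
    for k in (10, 25, 50):
        data = simulate_2pl(n, k, rng)
        print(n, k, round(data.mean(), 3))             # sanity check: mean p
```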
Byram, Jessica N.; Seifert, Mark F.; Brooks, William S.; Fraser-Cotlin, Laura; Thorp, Laura E.; Williams, James M.; Wilson, Adam B. – Anatomical Sciences Education, 2017
With integrated curricula and multidisciplinary assessments becoming more prevalent in medical education, there is a continued need for educational research to explore the advantages, consequences, and challenges of integration practices. This retrospective analysis investigated the number of items needed to reliably assess anatomical knowledge in…
Descriptors: Anatomy, Science Tests, Test Items, Test Reliability
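Questions of how many items are needed for reliable measurement are classically approached with the Spearman-Brown prophecy formula, which projects reliability under a change of test length. A minimal sketch, offered as general background rather than the authors' method (the numbers are illustrative):

```python
def spearman_brown(rho: float, k: float) -> float:
    """Projected reliability after changing test length by factor k:
    k*rho / (1 + (k - 1)*rho)."""
    return k * rho / (1 + (k - 1) * rho)

def length_factor(rho: float, target: float) -> float:
    """Lengthening factor needed to move reliability rho up to target
    (the inverse prophecy formula)."""
    return target * (1 - rho) / (rho * (1 - target))

rho_20 = 0.70                      # say 20 items give reliability .70
k = length_factor(rho_20, 0.80)    # factor needed to reach .80
print(f"need ~{20 * k:.0f} items; check: {spearman_brown(rho_20, k):.2f}")
```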
Lee, Wooyeol; Cho, Sun-Joo – Applied Measurement in Education, 2017
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
Descriptors: Item Response Theory, Test Items, Bias, Computation
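The recovery criteria named in the abstract reduce to simple summaries over Monte Carlo replications: bias is the mean estimation error, RMSE its root mean square, and coverage the share of replications whose 95% confidence interval contains the generating value. A minimal sketch with synthetic estimates standing in for one item parameter:

```python
import numpy as np

rng = np.random.default_rng(7)
true_b = 0.50                        # generating difficulty
R = 1000                             # Monte Carlo replications
se = np.full(R, 0.12)                # nominal standard errors
est = rng.normal(true_b, 0.12, R)    # estimates across replications

bias = np.mean(est - true_b)
rmse = np.sqrt(np.mean((est - true_b) ** 2))
lo, hi = est - 1.96 * se, est + 1.96 * se
coverage = np.mean((lo <= true_b) & (true_b <= hi))  # should be near .95
print(f"bias={bias:+.3f}  RMSE={rmse:.3f}  95% CI coverage={coverage:.3f}")
```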
Wang, Keyin – ProQuest LLC, 2017
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
Descriptors: Comparative Analysis, Computer Assisted Testing, Adaptive Testing, Test Items
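Mechanically, item-level CAT differs from MST in when adaptation happens: CAT re-selects the single most informative remaining item after every response, whereas MST adapts only between preassembled stages. A minimal sketch of CAT's canonical selection rule under a 2PL model (the provisional ability update is a crude stand-in for maximum-likelihood or EAP estimation, and all parameters are synthetic):

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

rng = np.random.default_rng(3)
a = rng.lognormal(0, 0.3, 100)       # pool of 100 items
b = rng.normal(0, 1, 100)
available = np.ones(100, dtype=bool)

theta_hat = 0.0
for step in range(10):               # 10-item adaptive test
    info = np.where(available, info_2pl(theta_hat, a, b), -np.inf)
    j = int(np.argmax(info))         # most informative remaining item
    available[j] = False
    correct = rng.random() < 1 / (1 + np.exp(-a[j] * (theta_hat - b[j])))
    theta_hat += 0.3 if correct else -0.3   # crude provisional update
print(f"provisional theta after 10 items: {theta_hat:+.2f}")
```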
Ajello, Anna Maria; Caponera, Elisa; Palmerio, Laura – European Journal of Psychology of Education, 2018
In Italy, from the 2003 reports to the present, the National Institute for the Educational Evaluation of Instruction and Training (INVALSI) has conducted research on Programme for International Student Assessment (PISA) results in order to understand Italian students' low achievement in mathematics. In the present paper, data from a representative…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
