Publication Date
| Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Count |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Zyluk, Natalia; Karpe, Karolina; Urbanski, Mariusz – SAGE Open, 2022
The aim of this paper is to describe the process of modifying a research tool designed for measuring the development of personal epistemology: the "Standardized Epistemological Understanding Assessment" (SEUA). SEUA was constructed as an improved version of the instrument initially proposed by Kuhn et al. SEUA proved to be a more…
Descriptors: Epistemology, Research Tools, Beliefs, Test Items
Metsämuuronen, Jari – Practical Assessment, Research & Evaluation, 2022
The reliability of a test score is usually underestimated, and the deflation may be profound: 0.40 to 0.60 units of reliability, or 46 to 71%. Eight root sources of the deflation are discussed and quantified by a simulation with 1,440 real-world datasets: (1) errors in the measurement modelling, (2) inefficiency in the estimator of reliability within…
Descriptors: Test Reliability, Scores, Test Items, Correlation
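The abstract above refers to one well-known deflation source that is easy to demonstrate: Cronbach's alpha assumes tau-equivalent items (equal loadings) and underestimates the reliability of a sum score when items are merely congeneric. A minimal simulation sketch, entirely my own construction (the loadings and sample size are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
lam = np.array([0.9, 0.7, 0.5, 0.3])  # unequal (congeneric) loadings, assumed
true = rng.normal(size=n)             # latent true score
# Each item = loading * true score + unit-variance error.
items = lam * true[:, None] + rng.normal(size=(n, len(lam)))

# Sample Cronbach's alpha for the sum score.
total = items.sum(axis=1)
k = len(lam)
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / total.var(ddof=1))

# Population reliability of the sum score under this model:
# (sum of loadings)^2 / ((sum of loadings)^2 + total error variance).
true_rel = lam.sum() ** 2 / (lam.sum() ** 2 + k * 1.0)

print(round(alpha, 3), round(true_rel, 3))  # alpha falls below true_rel
```

With these loadings the population alpha is about 0.563 while the true reliability is about 0.590, so alpha deflates the estimate even before the paper's other seven sources come into play.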
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Corrin Moss; Sharon Kwabi; Scott P. Ardoin; Katherine S. Binder – Reading and Writing: An Interdisciplinary Journal, 2024
The ability to form a mental model of a text is an essential component of successful reading comprehension (RC), and purpose for reading can influence mental model construction. Participants were assigned to one of two conditions during an RC test to alter their purpose for reading: concurrent (texts and questions were presented simultaneously)…
Descriptors: Eye Movements, Reading Comprehension, Test Format, Short Term Memory
Maria Bolsinova; Jesper Tijmstra; Leslie Rutkowski; David Rutkowski – Journal of Educational and Behavioral Statistics, 2024
Profile analysis is one of the main tools for studying whether differential item functioning can be related to specific features of test items. While relevant, profile analysis in its current form has two restrictions that limit its usefulness in practice: It assumes that all test items have equal discrimination parameters, and it does not test…
Descriptors: Test Items, Item Analysis, Generalizability Theory, Achievement Tests
Erlina Fatkur Rohmah; Sukarmin; Daru Wahyuningsih – Pegem Journal of Education and Instruction, 2024
The study aimed to analyze the content validation of the STEM-integrated thermal and transport concept inventory instrument used to measure the problem-solving abilities of high school students. The instrument comprised nine description questions. This study is development research. The steps in this research are…
Descriptors: Content Validity, Measures (Individuals), Concept Formation, STEM Education
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimations of Item Response Theory according to Maximum likelihood and Bayesian approaches in different Monte Carlo simulation conditions. For this purpose, depending on the changes in the priori distribution type, sample size, test length, and logistics model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
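For readers unfamiliar with the estimation problem this abstract compares, here is a deliberately tiny sketch of the maximum-likelihood side: estimating one examinee's ability under a 2PL IRT model with known item parameters, by grid search. All numbers (discriminations, difficulties, response pattern) are invented for illustration and are not from the paper:

```python
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # item discriminations (assumed)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # item difficulties (assumed)
resp = np.array([1, 1, 1, 0, 0])           # correct on easy items, wrong on hard

def loglik(theta):
    # 2PL success probability for each item, then Bernoulli log-likelihood.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

# Crude grid search over plausible abilities; real software uses
# Newton-Raphson or EM, and a Bayesian approach would add a prior on theta.
grid = np.linspace(-4, 4, 801)
theta_hat = grid[np.argmax([loglik(t) for t in grid])]
print(round(float(theta_hat), 2))
```

The Bayesian alternative the paper studies would maximize (or average over) the posterior, i.e., add a log-prior term such as a standard normal to `loglik`, which shrinks extreme estimates toward the prior mean.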
Hyunjung Lee; Heining Cham – Educational and Psychological Measurement, 2024
Determining the number of factors in exploratory factor analysis (EFA) is crucial because it affects the rest of the analysis and the conclusions of the study. Researchers have developed various methods for deciding the number of factors to retain, but this remains one of the most difficult decisions in EFA. The purpose of this study is…
Descriptors: Factor Structure, Factor Analysis, Monte Carlo Methods, Goodness of Fit
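One of the retention methods this literature commonly evaluates is Horn's parallel analysis: retain factors whose observed eigenvalues exceed those expected from random data. A self-contained sketch of the idea (simulated data and all settings are my own, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 6  # respondents, items (assumed for illustration)

# Simulate items loading on 2 latent factors, 3 items each.
factors = rng.normal(size=(n, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
                     [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])
data = factors @ loadings.T + rng.normal(scale=0.5, size=(n, p))

# Eigenvalues of the observed correlation matrix, descending.
obs_eig = np.linalg.eigvalsh(np.corrcoef(data.T))[::-1]

# Null reference: 95th-percentile eigenvalues from pure-noise datasets
# of the same shape.
sims = np.array([
    np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, p)).T))[::-1]
    for _ in range(200)
])
threshold = np.percentile(sims, 95, axis=0)

# Retain factors whose eigenvalues beat chance.
n_factors = int(np.sum(obs_eig > threshold))
print(n_factors)  # recovers the 2 factors built into the simulation
```

`np.linalg.eigvalsh` returns eigenvalues in ascending order, hence the `[::-1]`; the comparison is positional, so only leading eigenvalues that beat their noise counterparts count toward retention.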
Meike Akveld; George Kinnear – International Journal of Mathematical Education in Science and Technology, 2024
Many universities use diagnostic tests to assess incoming students' preparedness for mathematics courses. Diagnostic test results can help students to identify topics where they need more practice and give lecturers a summary of strengths and weaknesses in their class. We demonstrate a process that can be used to make improvements to a mathematics…
Descriptors: Mathematics Tests, Diagnostic Tests, Test Items, Item Analysis
Marta Montenegro-Rueda; José María Fernández-Batanero – European Journal of Special Needs Education, 2024
Instruments for evaluating teachers' digital competence are abundant; however, there is still a lack of instruments oriented to the context of Special Education. In this sense, this study presents the validation process of an instrument that aims to determine the level of knowledge and digital competence of Special Education teachers…
Descriptors: Teacher Competencies, Technological Literacy, Special Education Teachers, Test Construction
Lauritz Schewior; Marlit Annalena Lindner – Educational Psychology Review, 2024
Studies have indicated that pictures in test items can impact item-solving performance, information processing (e.g., time on task) and metacognition as well as test-taking affect and motivation. The present review aims to better organize the existing and somewhat scattered research on multimedia effects in testing and problem solving while…
Descriptors: Multimedia Materials, Computer Assisted Testing, Test Items, Pictorial Stimuli
Hannes M. Körner; Franz Faul; Antje Nuthmann – Cognitive Research: Principles and Implications, 2024
Observers' memory for a person's appearance can be compromised by the presence of a weapon, a phenomenon known as the weapon-focus effect (WFE). According to the unusual-item hypothesis, attention shifts from the perpetrator to the weapon because a weapon is an unusual object in many contexts. To test this assumption, we monitored participants'…
Descriptors: Weapons, Eye Movements, Observation, Familiarity
Marjolein Muskens; Willem E. Frankenhuis; Lex Borghans – npj Science of Learning, 2024
In many countries, standardized math tests are important for achieving academic success. Here, we examine whether content of items, the story that explains a mathematical question, biases performance of low-SES students. In a large-scale cohort study of Trends in International Mathematics and Science Studies (TIMSS)--including data from 58…
Descriptors: Mathematics Tests, Standardized Tests, Test Items, Low Income Students
Ondrej Klíma; Martin Lakomý; Ekaterina Volevach – International Journal of Social Research Methodology, 2024
We tested the impacts of Hofstede's cultural factors and mode of administration on item nonresponse (INR) for political questions in the European Values Study (EVS). We worked with the integrated European Values Study dataset, using descriptive analysis and multilevel binary logistic regression models. We concluded that (1) modes of administration…
Descriptors: Cultural Influences, Testing, Test Items, Responses
Svihla, Vanessa; Gallup, Amber – Practical Assessment, Research & Evaluation, 2021
In making validity arguments, a central consideration is whether the instrument fairly and adequately covers the intended content, and this is often evaluated by experts. While common procedures exist for assessing this quantitatively, the effect of loss aversion (a cognitive bias that would predict a tendency to retain items) on these procedures has…
Descriptors: Content Validity, Anxiety, Bias, Test Items
