Publication Date
In 2025 | 82
Since 2024 | 330
Since 2021 (last 5 years) | 1282
Since 2016 (last 10 years) | 2746
Since 2006 (last 20 years) | 4973
Descriptor
Test Items | 9400
Test Construction | 2673
Foreign Countries | 2122
Item Response Theory | 1843
Difficulty Level | 1597
Item Analysis | 1480
Test Validity | 1375
Test Reliability | 1152
Multiple Choice Tests | 1134
Scores | 1122
Computer Assisted Testing | 1040
Audience
Practitioners | 653
Teachers | 560
Researchers | 249
Students | 201
Administrators | 79
Policymakers | 21
Parents | 17
Counselors | 8
Community | 7
Support Staff | 3
Media Staff | 1
Location
Canada | 223
Turkey | 221
Australia | 155
Germany | 114
United States | 97
Florida | 86
China | 84
Taiwan | 75
Indonesia | 73
United Kingdom | 70
Netherlands | 64
What Works Clearinghouse Rating
Meets WWC Standards without Reservations | 4
Meets WWC Standards with or without Reservations | 4
Does not meet standards | 1
Li, Dongmei; Kapoor, Shalini – Educational Measurement: Issues and Practice, 2022
Population invariance is a desirable property of test equating that may not hold when significant changes occur in the test population, such as those brought about by the COVID-19 pandemic. This research investigates whether equating functions remain reasonably invariant when the test population is affected by the pandemic. Based on…
Descriptors: Test Items, Equated Scores, COVID-19, Pandemics
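For context on what "equating function" means here, a minimal sketch assuming the familiar linear equating function (the abstract does not say which equating method the authors used):

```latex
% Linear equating of a score x on form X onto the scale of form Y,
% using means and standard deviations estimated in the equating population:
e_Y(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X)
```

Population invariance holds when this function, re-estimated in a subpopulation (for example, examinees tested during the pandemic), stays close to the function estimated in the full population.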
Olney, Andrew M. – Grantee Submission, 2022
Multi-angle question-answering models that promise to perform related tasks, such as question generation, have recently been proposed. However, performance on these related tasks has not been thoroughly studied. We investigate a leading model called Macaw on the task of multiple-choice question generation and evaluate its performance on three angles that…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Models
Kim, Sooyeon; Walker, Michael E. – Educational Measurement: Issues and Practice, 2022
Test equating requires collecting data to link the scores from different forms of a test. Problems arise when equating samples are not equivalent and the test forms to be linked share no common items by which to measure or adjust for the group nonequivalence. Using data from five operational test forms, we created five pairs of research forms for…
Descriptors: Ability, Tests, Equated Scores, Testing Problems
Zyluk, Natalia; Karpe, Karolina; Urbanski, Mariusz – SAGE Open, 2022
The aim of this paper is to describe the modification of a research tool designed to measure the development of personal epistemology: the "Standardized Epistemological Understanding Assessment" (SEUA). SEUA was constructed as an improved version of the instrument initially proposed by Kuhn et al. SEUA proved to be a more…
Descriptors: Epistemology, Research Tools, Beliefs, Test Items
Metsämuuronen, Jari – Practical Assessment, Research & Evaluation, 2022
The reliability of a test score is usually underestimated, and the deflation may be profound: 0.40-0.60 units of reliability, or 46-71%. Eight root sources of the deflation are discussed and quantified by a simulation with 1,440 real-world datasets: (1) errors in the measurement modelling, (2) inefficiency in the estimator of reliability within…
Descriptors: Test Reliability, Scores, Test Items, Correlation
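For reference, the most widely used reliability estimator in this literature, written for a k-item test (a standard formula, not quoted from the article):

```latex
% Cronbach's alpha: item variances \sigma_i^2, total-score variance \sigma_X^2
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)
```

Because alpha grows with the inter-item covariances embedded in \sigma_X^2, any artifact that attenuates those covariances (for example, an inefficient item-score correlation estimator) deflates the reported reliability.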
Svihla, Vanessa; Gallup, Amber – Practical Assessment, Research & Evaluation, 2021
In making validity arguments, a central consideration is whether the instrument fairly and adequately covers intended content, and this is often evaluated by experts. While common procedures exist for quantitatively assessing this, the effect of loss aversion (a cognitive bias that would predict a tendency to retain items) on these procedures has…
Descriptors: Content Validity, Anxiety, Bias, Test Items
Stemler, Steven E.; Naples, Adam – Practical Assessment, Research & Evaluation, 2021
When students receive the same score on a test, does that mean they know the same amount about the topic? The answer to this question is more complex than it may first appear. This paper compares classical and modern test theories in terms of how they estimate student ability. Crucial distinctions between the aims of Rasch Measurement and IRT are…
Descriptors: Item Response Theory, Test Theory, Ability, Computation
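The distinction the paper draws can be made concrete with the standard model forms (general notation, not taken from the paper itself): the Rasch model fixes a common discrimination, while the two-parameter IRT model lets it vary by item.

```latex
% Rasch: only difficulty b_i varies; the raw sum score is a
% sufficient statistic for ability \theta_j
P(X_{ij} = 1 \mid \theta_j) = \frac{e^{\theta_j - b_i}}{1 + e^{\theta_j - b_i}}
% 2PL: item-specific discrimination a_i; the response pattern,
% not just the total score, drives the ability estimate
P(X_{ij} = 1 \mid \theta_j) = \frac{e^{a_i(\theta_j - b_i)}}{1 + e^{a_i(\theta_j - b_i)}}
```

Under the Rasch model, two students with the same raw score receive the same ability estimate; under the 2PL, which items they answered correctly can change it, which is exactly the ambiguity behind the paper's opening question.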
Edwards, Ashley A.; Joyner, Keanan J.; Schatschneider, Christopher – Educational and Psychological Measurement, 2021
The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors with varying…
Descriptors: Reliability, Computation, Accuracy, Sample Size
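A minimal sketch of the kind of check such a study performs, assuming a simple congeneric factor model (illustrative parameters, not the study's 140 conditions): simulate unidimensional continuous data with uncorrelated errors and compare Cronbach's alpha against the true reliability of the sum score.

```python
# Sketch: generate unidimensional continuous data with uncorrelated errors,
# then compare Cronbach's alpha to the true reliability implied by the model.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 10                       # sample size, number of items (assumed)
loadings = rng.uniform(0.4, 0.8, k)  # factor loadings (assumed values)
err_sd = np.sqrt(1 - loadings**2)    # unique-variance SDs -> unit item variance

theta = rng.normal(size=(n, 1))            # common factor scores
errors = rng.normal(size=(n, k)) * err_sd  # uncorrelated errors
X = theta * loadings + errors              # item scores (n x k)

# Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / total-score variance)
item_var = X.var(axis=0, ddof=1)
total_var = X.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var.sum() / total_var)

# True (omega-type) reliability of the sum score under the generating model
true_rel = loadings.sum()**2 / (loadings.sum()**2 + (err_sd**2).sum())

print(f"alpha = {alpha:.3f}, true reliability = {true_rel:.3f}")
```

With unequal loadings, alpha is expected to run slightly below the true reliability, since alpha equals reliability only under (essentially) tau-equivalent items.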
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF (differential item functioning) and difficulty studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations of different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
Cum, Sait – International Journal of Assessment Tools in Education, 2021
In this study, it is claimed that ROC analysis, which is used in medicine to determine how well diagnostic tests differentiate between patients and non-patients, can also be used to examine the discrimination of binary-scored items in cognitive tests. In order to obtain various evidence for this claim, the 2x2 contingency table used in…
Descriptors: Test Items, Item Analysis, Discriminant Analysis, Item Response Theory
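A minimal sketch of the idea, under assumed details the truncated abstract does not confirm (here the criterion is the rest-score, i.e., the total score minus the item): treat each binary item as a "diagnostic test" for high versus low ability and score its discrimination by the area under the ROC curve.

```python
# Sketch: item discrimination as ROC AUC against the rest-score criterion.
import numpy as np

def item_auc(item: np.ndarray, rest_score: np.ndarray) -> float:
    """AUC = P(rest-score of a correct responder > rest-score of an incorrect
    one), via the rank-sum (Mann-Whitney) identity, ties counted as 1/2."""
    pos = rest_score[item == 1]
    neg = rest_score[item == 0]
    if len(pos) == 0 or len(neg) == 0:
        return float("nan")
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
ability = rng.normal(size=300)
difficulty = np.linspace(-1.5, 1.5, 5)   # five Rasch items (assumed values)
p = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
X = (rng.uniform(size=p.shape) < p).astype(int)

for i in range(X.shape[1]):
    rest = X.sum(axis=1) - X[:, i]
    print(f"item {i}: AUC = {item_auc(X[:, i], rest):.3f}")
```

An AUC of 0.5 indicates no discrimination; values approaching 1 indicate that correct responders almost always outscore incorrect responders on the remaining items.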
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Corrin Moss; Sharon Kwabi; Scott P. Ardoin; Katherine S. Binder – Reading and Writing: An Interdisciplinary Journal, 2024
The ability to form a mental model of a text is an essential component of successful reading comprehension (RC), and purpose for reading can influence mental model construction. Participants were assigned to one of two conditions during an RC test to alter their purpose for reading: concurrent (texts and questions were presented simultaneously)…
Descriptors: Eye Movements, Reading Comprehension, Test Format, Short Term Memory
Maria Bolsinova; Jesper Tijmstra; Leslie Rutkowski; David Rutkowski – Journal of Educational and Behavioral Statistics, 2024
Profile analysis is one of the main tools for studying whether differential item functioning can be related to specific features of test items. While relevant, profile analysis in its current form has two restrictions that limit its usefulness in practice: It assumes that all test items have equal discrimination parameters, and it does not test…
Descriptors: Test Items, Item Analysis, Generalizability Theory, Achievement Tests
Erlina Fatkur Rohmah; Sukarmin; Daru Wahyuningsih – Pegem Journal of Education and Instruction, 2024
The study aimed to analyze the content validity of a STEM-integrated thermal and transport concept inventory used to measure the problem-solving abilities of high school students. The instrument consisted of nine open-ended (description) questions. The study is development research. The steps in this research are…
Descriptors: Content Validity, Measures (Individuals), Concept Formation, STEM Education
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of item response theory under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
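To make the contrast concrete, a minimal sketch for a single examinee under a 2PL model (illustrative item parameters and responses, not the study's simulation conditions): maximum likelihood maximizes the likelihood alone, while a Bayesian MAP estimate adds a standard-normal prior on ability.

```python
# Sketch: MLE vs. Bayesian MAP estimation of ability theta under a 2PL model.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])   # item discriminations (assumed)
b = np.array([-0.5, 0.0, 0.5, 1.0])  # item difficulties (assumed)
x = np.array([1, 1, 0, 0])           # observed responses (assumed)

def neg_log_lik(theta: float) -> float:
    p = 1 / (1 + np.exp(-a * (theta - b)))  # 2PL response probabilities
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def neg_log_post(theta: float) -> float:
    # add the N(0, 1) prior on theta (up to an additive constant)
    return neg_log_lik(theta) + 0.5 * theta**2

mle = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
map_ = minimize_scalar(neg_log_post, bounds=(-4, 4), method="bounded").x
print(f"MLE theta = {mle:.3f}, MAP theta = {map_:.3f}")
```

The prior shrinks the Bayesian estimate toward zero, an effect that is strongest for short tests and extreme response patterns, conditions of the kind the simulation varies.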