Publication Date
In 2025 | 82 |
Since 2024 | 330 |
Since 2021 (last 5 years) | 1282 |
Since 2016 (last 10 years) | 2746 |
Since 2006 (last 20 years) | 4973 |
Descriptor
Test Items | 9400 |
Test Construction | 2673 |
Foreign Countries | 2122 |
Item Response Theory | 1843 |
Difficulty Level | 1597 |
Item Analysis | 1480 |
Test Validity | 1375 |
Test Reliability | 1152 |
Multiple Choice Tests | 1134 |
Scores | 1122 |
Computer Assisted Testing | 1040 |
Audience
Practitioners | 653 |
Teachers | 560 |
Researchers | 249 |
Students | 201 |
Administrators | 79 |
Policymakers | 21 |
Parents | 17 |
Counselors | 8 |
Community | 7 |
Support Staff | 3 |
Media Staff | 1 |
Location
Canada | 223 |
Turkey | 221 |
Australia | 155 |
Germany | 114 |
United States | 97 |
Florida | 86 |
China | 84 |
Taiwan | 75 |
Indonesia | 73 |
United Kingdom | 70 |
Netherlands | 64 |
What Works Clearinghouse Rating
Meets WWC Standards without Reservations | 4 |
Meets WWC Standards with or without Reservations | 4 |
Does not meet standards | 1 |
Leighton, Elizabeth A. – ProQuest LLC, 2022
The use of unidimensional scales that contain both positively and negatively worded items is common in both the educational and psychological fields. However, dimensionality investigations of these instruments often lead to a rejection of the theorized unidimensional model in favor of multidimensional structures, leaving researchers at odds for…
Descriptors: Test Items, Language Usage, Models, Statistical Analysis
Patrik Havan; Michal Kohút; Peter Halama – International Journal of Testing, 2025
Acquiescence is the tendency of participants to shift their responses toward agreement. Lechner et al. (2019) introduced the following mechanisms of acquiescence: social deference and cognitive processing. We added their interaction to the theoretical framework. The sample consists of 557 participants. We found a significant, moderately strong relationship…
Descriptors: Cognitive Processes, Attention, Difficulty Level, Reflection
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to determine the accuracy of estimating multiple-choice test item parameters under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
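Since the abstract centers on absolute differences between estimated and true item parameters, a minimal sketch of that kind of accuracy index is given below; the parameter values and variable names are hypothetical, not taken from the study.

```python
# Hypothetical illustration: accuracy indices for recovered item difficulties
# in a simulation where the true (generating) values are known.
import numpy as np

true_difficulty = np.array([-1.2, -0.4, 0.0, 0.7, 1.5])       # generating values
estimated_difficulty = np.array([-1.1, -0.5, 0.1, 0.6, 1.7])   # values recovered by calibration

mae = np.mean(np.abs(estimated_difficulty - true_difficulty))   # mean absolute difference
rmse = np.sqrt(np.mean((estimated_difficulty - true_difficulty) ** 2))
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```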
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses information from the numbers of correct and wrong answers, which becomes the basis for calculating the expected response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
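One plausible reading of the described statistic is sketched below, assuming the wrong answers are expected to spread evenly across the distracters; this illustrates the general idea only, not the author's exact formula, and the counts are invented.

```python
# Illustrative only: goodness-of-fit chi-square over distracter choices, with
# expected counts taken as the wrong answers divided evenly among distracters.
import numpy as np
from scipy.stats import chi2

observed = np.array([34, 21, 25])          # counts for distracters A, C, D (B is the key)
n_wrong = observed.sum()                   # total wrong answers
expected = np.full(observed.size, n_wrong / observed.size)

statistic = np.sum((observed - expected) ** 2 / expected)
p_value = chi2.sf(statistic, df=observed.size - 1)
print(f"chi-square = {statistic:.2f}, p = {p_value:.3f}")
```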
A Method for Generating Course Test Questions Based on Natural Language Processing and Deep Learning
Hei-Chia Wang; Yu-Hung Chiang; I-Fan Chen – Education and Information Technologies, 2024
Assessment is viewed as an important means of understanding learners' performance in the learning process. A good assessment method is based on high-quality examination questions. However, manually generating high-quality examination questions is time-consuming for teachers, and it is not easy for students to obtain question banks. To solve…
Descriptors: Natural Language Processing, Test Construction, Test Items, Models
Belzak, William C. M. – Educational Measurement: Issues and Practice, 2023
Test developers and psychometricians have historically examined measurement bias and differential item functioning (DIF) across a single categorical variable (e.g., gender), independently of other variables (e.g., race, age, etc.). This is problematic when more complex forms of measurement bias may adversely affect test responses and, ultimately,…
Descriptors: Test Bias, High Stakes Tests, Artificial Intelligence, Test Items
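The abstract argues that DIF should be examined across more than one background variable at a time. One common (though not necessarily the author's) way to probe such intersectional DIF is logistic regression with an interaction between grouping variables; the sketch below uses simulated data and hypothetical variable names.

```python
# Sketch of logistic-regression DIF with two grouping variables and their
# interaction; data are simulated so the interaction term is non-zero.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "total": rng.normal(size=n),               # matching variable (ability proxy)
    "gender": rng.integers(0, 2, size=n),
    "age_group": rng.integers(0, 2, size=n),
})
logit = -0.5 + 1.2 * df.total + 0.3 * df.gender * df.age_group  # built-in interaction DIF
df["item"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("item ~ total + gender + age_group + gender:age_group", data=df).fit(disp=0)
print(model.summary().tables[1])   # a sizable interaction term suggests intersectional DIF
```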
Metsämuuronen, Jari – Practical Assessment, Research & Evaluation, 2023
Traditional estimators of reliability such as coefficients alpha, theta, omega, and rho (maximal reliability) are prone to radically underestimate reliability for the kinds of tests common in educational achievement testing. These tests are often built from items with widely varying difficulties. This is a typical pattern where the traditional…
Descriptors: Test Reliability, Achievement Tests, Computation, Test Items
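As background for the estimators named above, here is a minimal sketch of coefficient alpha computed from an examinee-by-item score matrix; the toy data are invented and serve only to make the "traditional estimator" concrete.

```python
# Coefficient alpha from an examinee-by-item score matrix (toy 0/1 data).
import numpy as np

scores = np.array([            # rows = examinees, columns = items
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
])
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"coefficient alpha = {alpha:.3f}")
```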
Huang, Sijia; Luo, Jinwen; Cai, Li – Educational and Psychological Measurement, 2023
Random item effects item response theory (IRT) models, which treat both person and item effects as random, have received much attention for more than a decade. The random item effects approach has several advantages in many practical settings. The present study introduced an explanatory multidimensional random item effects rating scale model. The…
Descriptors: Rating Scales, Item Response Theory, Models, Test Items
He, Dan – ProQuest LLC, 2023
This dissertation examines the effectiveness of machine learning algorithms and feature engineering techniques for analyzing process data and predicting test performance. The study compares three classification approaches and identifies item-specific process features that are highly predictive of student performance. The findings suggest that…
Descriptors: Artificial Intelligence, Data Analysis, Algorithms, Classification
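The dissertation is described as comparing classification approaches on process-data features. A generic, hypothetical illustration of that setup (simulated features, one off-the-shelf classifier) might look like the following; it is not the study's actual pipeline, and the feature names are invented.

```python
# Generic illustration: predicting pass/fail from engineered process features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500
features = np.column_stack([
    rng.normal(60, 15, n),        # seconds spent on the item
    rng.poisson(8, n),            # number of recorded actions
    rng.normal(0, 1, n),          # any other engineered feature
])
passed = (features[:, 0] + 5 * rng.normal(size=n) > 55).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, features, passed, cv=5).mean())   # cross-validated accuracy
```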
Kylie Gorney; Sandip Sinharay – Educational and Psychological Measurement, 2025
Test-takers, policymakers, teachers, and institutions are increasingly demanding that testing programs provide more detailed feedback regarding test performance. As a result, there has been a growing interest in the reporting of subscores that potentially provide such detailed feedback. Haberman developed a method based on classical test theory…
Descriptors: Scores, Test Theory, Test Items, Testing
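Haberman's criterion is often summarized as: report a subscore only when it predicts the examinee's true subscore better than the total score does, with the two compared through proportional reduction in mean squared error (PRMSE). A minimal decision sketch under that summary is shown below; the PRMSE values are assumed to have been estimated elsewhere, and none of this is drawn from the article itself.

```python
# Decision sketch under the PRMSE comparison commonly attributed to Haberman:
# the two values below are placeholders for estimates obtained from data.
prmse_from_subscore = 0.78   # how well the observed subscore predicts the true subscore
prmse_from_total = 0.83      # how well the total score predicts the true subscore

if prmse_from_subscore > prmse_from_total:
    print("Subscore has added value; report it.")
else:
    print("Total score predicts the true subscore at least as well; do not report.")
```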
Haokun Liu – International Journal of Multilingualism, 2025
Globally, countries and regions from east to west, such as Hong Kong, Macao, Taiwan, Singapore, the United Kingdom, and the United States, have incorporated language questions in their censuses. Assessing the advantages and disadvantages of such designs is crucial for academic investigation. Despite ongoing discussions, there is a noticeable…
Descriptors: Language Usage, Demography, Surveys, Questionnaires
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
Mixed-format data commonly result from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Kaja Haugen; Cecilie Hamnes Carlsen; Christine Möller-Omrani – Language Awareness, 2025
This article presents the process of constructing and validating a test of metalinguistic awareness (MLA) for young school children (age 8-10). The test was developed between 2021 and 2023 as part of the MetaLearn research project, financed by The Research Council of Norway. The research team defines MLA as using metalinguistic knowledge at a…
Descriptors: Language Tests, Test Construction, Elementary School Students, Metalinguistics
Harold Doran; Testsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
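For context on what an item selection algorithm does, here is a hedged sketch of the classical maximum-information rule under a 2PL model, the usual baseline that generalized objective functions extend; the item parameters are invented and this is not the authors' algorithm.

```python
# Baseline CAT selection rule: administer the unadministered item with maximum
# Fisher information at the current ability estimate (2PL model, toy parameters).
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0])    # discriminations
b = np.array([-0.5, 0.0, 0.3, 1.1])   # difficulties
administered = {0}
theta_hat = 0.2                        # current ability estimate

p = 1 / (1 + np.exp(-a * (theta_hat - b)))
information = a**2 * p * (1 - p)       # 2PL item information
information[list(administered)] = -np.inf
next_item = int(np.argmax(information))
print(f"administer item {next_item}")
```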
Zopluoglu, Cengiz; Kasli, Murat; Toton, Sarah L. – Educational Measurement: Issues and Practice, 2021
Response time information has recently attracted significant attention in the literature as it may provide meaningful information about item preknowledge. The methods that use response time information to identify examinees with potential item preknowledge make an implicit assumption that the examinees with item preknowledge differ in their…
Descriptors: Reaction Time, Cheating, Test Items
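Methods in this area typically start from the observation that examinees with preknowledge answer compromised items unusually fast. A simple, generic screen (not the authors' method) that z-scores log response times within items and flags very fast responses could look like this; all values are simulated and the threshold is arbitrary.

```python
# Toy screen: flag unusually fast responses by z-scoring log response times
# within each item, then review examinees with several flags.
import numpy as np

rng = np.random.default_rng(1)
times = rng.lognormal(mean=3.5, sigma=0.4, size=(200, 10))   # seconds, examinees x items
log_t = np.log(times)
z = (log_t - log_t.mean(axis=0)) / log_t.std(axis=0)

fast_flags = z < -2.5                         # very fast relative to other examinees
suspect_examinees = np.where(fast_flags.sum(axis=1) >= 3)[0]
print(f"{len(suspect_examinees)} examinees flagged for review")
```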