Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience
| Audience Group | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Zijlmans, Eva A. O.; Tijmstra, Jesper; van der Ark, L. Andries; Sijtsma, Klaas – Educational and Psychological Measurement, 2018
Reliability is usually estimated for a total score, but it can also be estimated for item scores. Item-score reliability can be useful to assess the repeatability of an individual item score in a group. Three methods to estimate item-score reliability are discussed, known as method MS, method λ6, and method CA. The item-score…
Descriptors: Test Items, Test Reliability, Correlation, Comparative Analysis
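The abstract above contrasts total-score reliability with item-score reliability. As background only, here is a minimal sketch of the conventional total-score reliability estimate (coefficient alpha); this is not an implementation of the paper's MS, λ6, or CA methods, and the data matrix is invented for illustration:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Coefficient alpha for a persons-by-items score matrix (list of rows):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(scores[0])  # number of items
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5 persons x 4 dichotomous items
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(data), 3))  # → 0.8
```

Item-score reliability, the paper's topic, instead asks how repeatable a single column of this matrix is, which alpha does not address.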
Luo, Yong – Educational and Psychological Measurement, 2018
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Descriptors: Computer Software, Models, Statistical Analysis, Computation
Fay, Derek M.; Levy, Roy; Mehta, Vandhana – Journal of Educational Measurement, 2018
A common practice in educational assessment is to construct multiple forms of an assessment that consists of tasks with similar psychometric properties. This study utilizes a Bayesian multilevel item response model and descriptive graphical representations to evaluate the psychometric similarity of variations of the same task. These approaches for…
Descriptors: Psychometrics, Performance Based Assessment, Bayesian Statistics, Item Response Theory
Embretson, Susan E.; Kingston, Neal M. – Journal of Educational Measurement, 2018
The continual supply of new items is crucial to maintaining quality for many tests. Automatic item generation (AIG) has the potential to rapidly increase the number of items that are available. However, the efficiency of AIG will be mitigated if the generated items must be submitted to traditional, time-consuming review processes. In two studies,…
Descriptors: Mathematics Instruction, Mathematics Achievement, Psychometrics, Test Items
Lu, Ru; Guo, Hongwen – ETS Research Report Series, 2018
In this paper we compare the newly developed pseudo-equivalent groups (PEG) linking method with the linking methods based on the traditional nonequivalent groups with anchor test (NEAT) design and illustrate how to use the PEG method under imperfect equating conditions. To do this, we propose a new method that combines the features of PEG…
Descriptors: Equated Scores, Comparative Analysis, Test Items, Background
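For readers new to score linking, a minimal sketch of the simplest case, linear linking under an equivalent-groups assumption, which matches the means and standard deviations of two forms. This is background for the abstract above, not the PEG or NEAT procedures it studies, and the score vectors are invented:

```python
from statistics import mean, pstdev

def linear_link(x_scores, y_scores):
    """Return a function mapping form-X scores onto the form-Y scale by
    matching means and SDs: y = mu_Y + (sd_Y / sd_X) * (x - mu_X)."""
    mx, my = mean(x_scores), mean(y_scores)
    sx, sy = pstdev(x_scores), pstdev(y_scores)
    return lambda x: my + (sy / sx) * (x - mx)

x = [10, 12, 14, 16, 18]  # hypothetical form-X scores
y = [20, 23, 26, 29, 32]  # hypothetical form-Y scores
to_y = linear_link(x, y)
print(to_y(14))  # → 26.0
```

NEAT and PEG designs exist precisely because the two groups are usually *not* equivalent, so an anchor test (or pseudo-equivalence weighting) must stand in for the assumption this sketch makes.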
Cui, Zhongmin; Liu, Chunyan; He, Yong; Chen, Hanwei – Journal of Educational Measurement, 2018
Allowing item review in computerized adaptive testing (CAT) is getting more attention in the educational measurement field as more and more testing programs adopt CAT. The research literature has shown that allowing item review in an educational test could result in more accurate estimates of examinees' abilities. The practice of item review in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Wiseness
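The abstract above concerns item review in CAT. For context, a minimal sketch of the core CAT selection step that review policies interact with: choosing the unadministered item with maximum Fisher information at the current ability estimate, under a two-parameter logistic (2PL) model. The item pool is invented; this is not the paper's procedure:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, pool, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_info(theta, *pool[i]))

pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.2), (1.0, 1.5)]  # (a, b) pairs
print(next_item(0.0, pool, administered={3}))  # → 2
```

Allowing examinees to revisit answers complicates this loop, since every response an item selection was conditioned on may later change.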
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step of the typical process of developing educational and psychological tests is to place the selected test items in a formatted form. The step involves the grouping and ordering of the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
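The abstract above describes formatting test forms as a mixed-integer programming problem. As a toy illustration of the kind of ordering constraints involved (invented here: no two items from the same passage adjacent, difficulty roughly increasing), a brute-force search over a tiny pool; real assembly engines encode such rules as MIP constraints because enumeration is infeasible at scale:

```python
from itertools import permutations

# Hypothetical pool: item id -> (passage, difficulty)
items = {
    "A1": ("P1", 0.2), "A2": ("P1", 0.5),
    "B1": ("P2", 0.3), "B2": ("P2", 0.8),
}

def violations(order):
    """Count constraint violations for a candidate ordering."""
    same_passage = sum(items[x][0] == items[y][0]
                       for x, y in zip(order, order[1:]))
    inversions = sum(items[x][1] > items[y][1]
                     for x, y in zip(order, order[1:]))
    return same_passage + inversions

best = min(permutations(items), key=violations)
print(best, violations(best))
```

With n items there are n! orderings, which is exactly why the paper turns to MIP solvers rather than search.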
Traynor, A.; Merzdorf, H. E. – Educational Measurement: Issues and Practice, 2018
During the development of large-scale curricular achievement tests, recruited panels of independent subject-matter experts use systematic judgmental methods--often collectively labeled "alignment" methods--to rate the correspondence between a given test's items and the objective statements in a particular curricular standards document.…
Descriptors: Achievement Tests, Expertise, Alignment (Education), Test Items
Shu-Fen Lin; Wan-Chin Shie – International Journal of Science and Mathematics Education, 2024
Teachers lack effective curriculum-based instruments to assess their students' scientific competence that would provide information for modifying their inquiry instruction. The main purpose of this study was to develop and validate a Curriculum-Based Scientific Competence (CBSC) test to assess students' scientific competence in a 1-semester Grade…
Descriptors: Science Curriculum, Validity, Grade 9, Science Tests
Marzieh Souzandehfar – International Journal of Language Testing, 2024
This study represents the inaugural attempt at assessing the authenticity of the tasks encompassed in the IELTS Speaking Module. The evaluation is conducted from the vantage points of applied linguistics and general education, and serves to enhance comprehension of authenticity and authentic assessment. In order to achieve this objective, an…
Descriptors: Speech Communication, Thinking Skills, Problem Solving, Applied Linguistics
Sivakorn Tangsakul; Kornwipa Poonpon – rEFLections, 2024
Given the significant global influence of the Common European Framework of Reference for Languages: Teaching, Learning, and Assessment (CEFR) on English language education, this study deals with aligning a university's academic reading tests to the CEFR. It aimed at validating the test construct of the academic reading tests in relation to the…
Descriptors: Alignment (Education), Reading Tests, Second Language Learning, Language Proficiency
Huu Thanh Minh Nguyen; Nguyen Van Anh Le – TESL-EJ, 2024
Comparing language tests and test preparation materials holds important implications for the latter's validity and reliability. However, not enough studies compare such materials across a wide range of indices. Therefore, this study investigated the text complexity of IELTS academic reading tests (IRT) and IELTS reading practice tests (IRPrT).…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Readability
Baskonus, Turan; Soyer, Fikret – International Journal of Psychology and Educational Studies, 2020
The aim of this study is to develop a scale that measures the attitudes of physical education and sports teachers towards measurement and evaluation. In this study, scale development principles and steps of DeVellis (2017) were used. Initially, a literature review was conducted, 19 physical education and sports teachers were interviewed in written…
Descriptors: Test Construction, Teacher Attitudes, Physical Education Teachers, Test Items
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale low-stake assessment such as the Programme for International Student Assessment (PISA), students may skip items (missingness) which are within their ability to complete. The detection and taking care of these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
Klugman, Emma M.; Ho, Andrew D. – Educational Measurement: Issues and Practice, 2020
State testing programs regularly release previously administered test items to the public. We provide an open-source recipe for state, district, and school assessment coordinators to combine these items flexibly to produce scores linked to established state score scales. These would enable estimation of student score distributions and achievement…
Descriptors: Testing Programs, State Programs, Test Items, Scores