Publication Date
| Period | Count |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience
| Audience | Count |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Count |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| Rating | Count |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Antal, Tamás – ETS Research Report Series, 2007
A coordinate-free definition of complex-structure multidimensional item response theory (MIRT) for dichotomously scored items is presented. The point of view taken emphasizes the possibilities and subtleties of understanding MIRT as a multidimensional extension of the classical unidimensional item response theory models. The main theorem of the…
Descriptors: Item Response Theory, Models, Test Items, Computation
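The compensatory multidimensional extension of the unidimensional 2PL model that MIRT papers such as this one build on can be sketched as follows. This is an illustrative sketch only, not code from the report; the function and parameter names (`theta`, `a`, `d`) are conventional MIRT notation, not taken from the source.

```python
import numpy as np

def mirt_2pl_probability(theta, a, d):
    """Probability of a correct response on a dichotomous item under a
    compensatory multidimensional 2PL model: P = sigmoid(a . theta + d).

    theta : (K,) latent ability vector
    a     : (K,) item discrimination vector
    d     : scalar item intercept (easiness)
    """
    z = np.dot(a, theta) + d
    return 1.0 / (1.0 + np.exp(-z))

# A two-dimensional item that discriminates mainly on the first trait:
p = mirt_2pl_probability(theta=np.array([1.0, -0.5]),
                         a=np.array([1.2, 0.3]),
                         d=-0.2)
```

With K = 1 this reduces to the classical unidimensional 2PL model, which is the sense in which MIRT is a multidimensional extension of it.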
Dohn, Nina Bonderup – Journal of Philosophy of Education, 2007
This article gives a critique of the methodology of OECD's Programme for International Student Assessment (PISA). It is argued that PISA is invalidated by the fact that the methodology chosen does not constitute an adequate operationalisation of the question of inquiry. Therefore, contrary to the claims of PISA, PISA is not an assessment of the…
Descriptors: Foreign Countries, Test Items, Student Evaluation, Evaluation Methods
Anderson, Carolyn J.; Yu, Hsiu-Ting – Psychometrika, 2007
Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Bockenholt,…
Descriptors: Probability, Item Response Theory, Models, Psychometrics
Graf, Edith Aurora – ETS Research Report Series, 2008
Quantitative item models are item structures that may be expressed in terms of mathematical variables and constraints. An item model may be developed as a computer program from which large numbers of items are automatically generated. Item models can be used to produce large numbers of items for use in traditional, large-scale assessments. But…
Descriptors: Test Items, Models, Diagnostic Tests, Statistical Analysis
National Assessment Governing Board, 2008
The National Assessment of Educational Progress (NAEP) for the arts measures students' knowledge and skills in creating, performing, and responding to works of music, theatre, and visual arts. This framework document asserts that dance, music, theatre and the visual arts are important parts of a full education. When students engage in the arts,…
Descriptors: Art Education, Visual Arts, Music, Theater Arts
Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2008
This technical report describes the development of fifth grade progress monitoring measures in the area of Passage Reading Fluency. This measure was designed to target the fluency component of a developmental model of reading. Twenty alternate forms were written by graduate students and reviewed by the lead author. The passages were piloted and…
Descriptors: Reading Fluency, Reading Tests, Grade 5, Test Construction
Derner, Seth; Klein, Steve; Hilber, Don – MPR Associates, Inc., 2008
This report documents strategies that can be used to initiate development of a technical skill test item bank and/or assessment clearinghouse and quantifies the cost of creating and maintaining such a system. It is intended to inform state administrators on the potential uses and benefits of system participation, test developers on the needs and…
Descriptors: Test Items, State Surveys, Clearinghouses, Item Banks
Veldkamp, Bernard P. – International Journal of Testing, 2008
Integrity™, an online application for testing both the statistical integrity of the test and the academic integrity of the examinees, was evaluated for this review. Program features and the program output are described. An overview of the statistics in Integrity™ is provided, and the application is illustrated with a small simulation study.…
Descriptors: Simulation, Integrity, Statistics, Computer Assisted Testing
Johnstone, Christopher J.; Thompson, Sandra J.; Bottsford-Miller, Nicole A.; Thurlow, Martha L. – Educational Measurement: Issues and Practice, 2008
Test items undergo multiple iterations of review before states and vendors deem them acceptable to be placed in a live statewide assessment. This article reviews three approaches that can add validity evidence to states' item review processes. The first process is a structured sensitivity review process that focuses on universal design…
Descriptors: Test Items, Disabilities, Test Construction, Testing Programs
Zhang, Bo; Walker, Cindy M. – Applied Psychological Measurement, 2008
The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…
Descriptors: Item Response Theory, Computation, Goodness of Fit, Test Items
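Two of the missing-data treatments named in the abstract above can be contrasted in a few lines. This is a minimal sketch under MCAR assumptions, not the study's code; the variable names and the 20% missingness rate are illustrative.

```python
import numpy as np

# Simulated dichotomous responses (5 persons x 10 items) with
# missing-completely-at-random (MCAR) gaps.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(5, 10)).astype(float)
responses[rng.random(responses.shape) < 0.2] = np.nan

# Treatment 1: code missing responses as incorrect (0).
as_incorrect = np.nan_to_num(responses, nan=0.0)
scores_incorrect = as_incorrect.mean(axis=1)

# Treatment 2: pairwise-style deletion -- score each person only on
# the items they actually answered.
scores_observed = np.nanmean(responses, axis=1)

# Coding missing as incorrect can only lower a person's score, which
# is one way such treatments can distort trait estimation.
assert np.all(scores_incorrect <= scores_observed + 1e-12)
```

Hot-deck imputation, the third technique the abstract names, would instead fill each gap with an observed response from a similar examinee.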
French, Brian F.; Finch, W. Holmes – Structural Equation Modeling: A Multidisciplinary Journal, 2008
Multigroup confirmatory factor analysis (MCFA) is a popular method for the examination of measurement invariance and specifically, factor invariance. Recent research has begun to focus on using MCFA to detect invariance for test items. MCFA requires certain parameters (e.g., factor loadings) to be constrained for model identification, which are…
Descriptors: Test Items, Simulation, Factor Structure, Factor Analysis
Froelich, Amy G.; Habing, Brian – Applied Psychological Measurement, 2008
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
Descriptors: Test Items, Monte Carlo Methods, Form Classes (Languages), Program Effectiveness
Test-Retest Reliability of a Theory of Mind Task Battery for Children with Autism Spectrum Disorders
Hutchins, Tiffany L.; Prelock, Patricia A.; Chace, Wendy – Focus on Autism and Other Developmental Disabilities, 2008
This study examined for the first time the test-retest reliability of theory-of-mind tasks when administered to children with Autism Spectrum Disorders (ASD). A total of 16 questions within 9 tasks targeting a range of content and complexity were administered at 2 times to 17 children with ASD. In all, 13 questions demonstrated adequate…
Descriptors: Autism, Response Style (Tests), Verbal Ability, Test Reliability
Wu, Pei-Chen; Chang, Lily – Measurement and Evaluation in Counseling and Development, 2008
The authors investigated the Chinese version of the Beck Depression Inventory-II (BDI-II-C; Chinese Behavioral Science Corporation, 2000) within the Rasch framework in terms of dimensionality, item difficulty, and category functioning. Two underlying scale dimensions, relatively high item difficulties, and a need for collapsing 2 response…
Descriptors: Test Items, Foreign Countries, Psychometrics, Behavioral Sciences
Sapriati, Amalia; Zuhairi, Aminudin – Turkish Online Journal of Distance Education, 2010
This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT for reasons of meeting the specific needs of distance students as the following: (1) students' inability to sit for the scheduled test; (2) conflicting test…
Descriptors: Alternative Assessment, Distance Education, Computer Assisted Testing, Computer System Design