Peer reviewed
ERIC Number: EJ954596
Record Type: Journal
Publication Date: 2011
Pages: 11
Abstractor: As Provided
ISBN: N/A
ISSN: 1529-7713
EISSN: N/A
Available Date: N/A
Equating of Multi-Facet Tests across Administrations
Lunz, Mary; Suanthong, Surintorn
Journal of Applied Measurement, v12 n2 p124-134 Sum 2011
The desirability of test equating to maintain the same criterion standard from one test administration to the next has long been accepted for multiple-choice tests. The same consistency of expectations is desirable for performance tests, especially if they are part of a licensure or certification process or are used for other high-stakes decisions (e.g., graduation). Performance tests typically have three or more facets (e.g., examinees, raters, items, and tasks), all of which must be accounted for in the test-equating process. The application of the multi-facet Rasch model (Linacre, 2003a) is essential for equating performance tests because it provides calibrations of the elements of each facet and accounts for the differences in the tests taken by each examinee within a test administration. When multi-facet tests are equated across administrations, differences between the benchmark scale and the current test must be accounted for in each facet; examinee measures are then adjusted for the differences between tests. The examples presented in this article were selected because they differ in size and complexity of test design, and those differences demonstrate how the same principles of common-element test equating can be applied regardless of the number of facets included in the test. Performance tests with more than two facets can be equated, as long as appropriate quality-control methods are employed. First, use carefully selected common elements for each facet that represent the content and properties of the test; the common elements should be unaltered from their original use. Then, the most effective method is to anchor all common elements in each facet initially and iteratively unanchor those elements that do not meet the criteria for displacement and fit. Strict criteria for displacement must be applied consistently across facets; the suggested criterion is a displacement of 0.5 logits or less. Unanchoring inconsistent and/or misfitting facet elements improves the quality of the test equating.
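The iterative anchor-then-unanchor quality-control loop described in the abstract can be sketched briefly. In the Python sketch below, the facet and element names, the recalibrate() stand-in (which in practice would be a many-facet Rasch estimation run, e.g., in FACETS software), and the infit cutoff are illustrative assumptions; only the 0.5-logit displacement criterion comes from the article.

    # Hypothetical sketch of the iterative anchor/unanchor procedure.
    # Element names, recalibrate(), and FIT_CUTOFF are assumptions for illustration.
    DISPLACEMENT_CUTOFF = 0.5   # logits; the criterion suggested in the article
    FIT_CUTOFF = 1.5            # assumed mean-square infit limit (not from the article)

    def recalibrate(anchored):
        """Stand-in for a multi-facet Rasch run with the given (facet, element)
        pairs anchored at their benchmark calibrations. Returns, per common
        element, its displacement from the anchor value and a fit statistic."""
        # In practice this would call out to Rasch estimation software.
        return {
            ("items", "item_07"):  {"displacement": 0.12, "infit": 1.05},
            ("raters", "rater_3"): {"displacement": 0.68, "infit": 1.72},  # drifts
            ("tasks", "task_B"):   {"displacement": 0.31, "infit": 0.94},
        }

    # Start with every common element in every facet anchored to the benchmark scale.
    anchored = {("items", "item_07"), ("raters", "rater_3"), ("tasks", "task_B")}

    while True:
        stats = recalibrate(anchored)
        # Flag anchored elements that show excessive displacement or misfit.
        to_unanchor = {
            element for element, s in stats.items()
            if element in anchored
            and (abs(s["displacement"]) > DISPLACEMENT_CUTOFF or s["infit"] > FIT_CUTOFF)
        }
        if not to_unanchor:
            break                   # remaining anchors are stable; equating holds
        anchored -= to_unanchor     # drop unstable anchors and re-estimate

Because removing an anchor shifts the frame of reference slightly, the sketch recalibrates after each round of unanchoring rather than dropping all flagged elements in a single pass.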
JAM Press. P.O. Box 1283, Maple Grove, MN 55311. e-mail: info@jampress.org; Web site: http://www.jampress.org
Publication Type: Journal Articles; Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A