Showing all 9 results
Peer reviewed
Oliveri, Maria Elena; Lawless, Rene; Robin, Frederic; Bridgeman, Brent – Applied Measurement in Education, 2018
We analyzed a pool of items from an admissions test for differential item functioning (DIF) across groups based on age, socioeconomic status, citizenship, or English language status, using Mantel-Haenszel and item response theory procedures. DIF items were systematically examined to identify their possible sources by item type, content, and wording. DIF was…
Descriptors: Test Bias, Comparative Analysis, Item Banks, Item Response Theory
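For readers unfamiliar with the Mantel-Haenszel procedure named in the abstract above, the following is a minimal Python sketch of the common odds ratio pooled over score-matched strata, along with the ETS delta-metric transformation. The counts are toy values for illustration, not data from the study.

```python
import math

# Toy stratified counts: for each total-score stratum,
# (A, B, C, D) = (ref correct, ref incorrect, focal correct, focal incorrect).
strata = [
    (40, 10, 30, 20),
    (35, 15, 25, 25),
    (20, 30, 10, 40),
]

def mantel_haenszel_odds_ratio(strata):
    """Common odds ratio alpha_MH pooled over score strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

alpha = mantel_haenszel_odds_ratio(strata)
# ETS delta metric: MH D-DIF = -2.35 * ln(alpha_MH); values near 0 suggest little DIF.
mh_d_dif = -2.35 * math.log(alpha)
print(f"alpha_MH = {alpha:.3f}, MH D-DIF = {mh_d_dif:.3f}")
```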
Peer reviewed
Herde, Christoph Nils; Wüstenberg, Sascha; Greiff, Samuel – Applied Measurement in Education, 2016
Complex Problem Solving (CPS) is seen as a cross-curricular 21st century skill that has attracted interest in large-scale assessments. In the Programme for International Student Assessment (PISA) 2012, CPS was assessed all over the world to gain information on students' skills in acquiring and applying knowledge while dealing with nontransparent…
Descriptors: Problem Solving, Achievement Tests, Foreign Countries, International Assessment
Peer reviewed
Liu, Ou Lydia; Wilson, Mark – Applied Measurement in Education, 2009
Many efforts have been made to determine and explain differential gender performance on large-scale mathematics assessments. A widely agreed-upon conclusion is that gender differences are contextualized and vary across math domains. This study investigated the pattern of gender differences by item domain (e.g., Space and Shape, Quantity) and item type…
Descriptors: Gender Differences, Mathematics Tests, Measurement, Test Format
Peer reviewed
Vispoel, Walter P.; And Others – Applied Measurement in Education, 1994
Vocabulary fixed-item (FIT), computerized-adaptive (CAT), and self-adapted (SAT) tests were compared with 121 college students. CAT was more precise and efficient than SAT, which was more precise and efficient than FIT. SAT also yielded higher ability estimates for individuals with lower verbal self-concepts. (SLD)
Descriptors: Ability, Adaptive Testing, College Students, Comparative Analysis
Peer reviewed
Penfield, Randall D. – Applied Measurement in Education, 2006
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Descriptors: Bayesian Statistics, Adaptive Testing, Computer Assisted Testing, Test Items
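The MPI criterion in the abstract above weights each candidate item's Fisher information by the current posterior over ability. Below is a rough Python sketch of that idea under the partial credit model, using a toy item bank and a normal prior standing in for the posterior; Penfield's exact formulation and simulation design are not reproduced here.

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Partial credit model category probabilities for one item.
    deltas: step difficulties delta_1..delta_m; categories 0..m."""
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    e = np.exp(steps - steps.max())  # stabilized exponentials
    return e / e.sum()

def pcm_information(theta, deltas):
    """Fisher information of a PCM item = conditional variance of the item score."""
    p = pcm_probs(theta, deltas)
    x = np.arange(len(p))
    mean = (x * p).sum()
    return ((x - mean) ** 2 * p).sum()

def select_item_mpi(grid, weights, item_bank, administered):
    """Maximum posterior-weighted information: integrate each remaining item's
    information against the posterior over theta and pick the maximizer."""
    best, best_val = None, -np.inf
    for j, deltas in enumerate(item_bank):
        if j in administered:
            continue
        info = sum(w * pcm_information(t, deltas) for t, w in zip(grid, weights))
        if info > best_val:
            best, best_val = j, info
    return best

# Toy bank of three 3-category items (two step difficulties each).
bank = [[-0.5, 0.5], [0.0, 1.0], [-1.0, 0.0]]
grid = np.linspace(-4, 4, 81)
post = np.exp(-0.5 * grid**2)   # standard-normal prior as a stand-in posterior
post /= post.sum()
print("next item:", select_item_mpi(grid, post, bank, administered=set()))
```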
Peer reviewed
Barnes, Laura L. B.; Wise, Steven L. – Applied Measurement in Education, 1991
One-parameter and three-parameter item response theory (IRT) model estimates were compared with estimates obtained from two modified one-parameter models that incorporated a constant nonzero guessing parameter. Using small-sample simulation data (50, 100, and 200 simulated examinees), the modified one-parameter models were most effective in estimating…
Descriptors: Ability, Achievement Tests, Comparative Analysis, Computer Simulation
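The "constant nonzero guessing parameter" modification studied by Barnes and Wise amounts to fixing the lower asymptote of the one-parameter logistic curve rather than estimating it per item. A minimal sketch of that item characteristic curve; the value c = 0.2 is an illustrative choice, not one taken from the study.

```python
import math

def modified_1pl(theta, b, c=0.2):
    """One-parameter logistic ICC with a fixed lower asymptote c.
    With c = 0 this reduces to the ordinary Rasch/1PL model;
    c = 0.2 mimics chance success on five-option multiple-choice items."""
    return c + (1.0 - c) / (1.0 + math.exp(-(theta - b)))

for theta in (-2.0, 0.0, 2.0):
    print(theta, round(modified_1pl(theta, b=0.0), 3))
```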
Peer reviewed
Ponsoda, Vicente; Olea, Julio; Rodriguez, Maria Soledad; Revuelta, Javier – Applied Measurement in Education, 1999
Compared easy and difficult versions of self-adapted tests (SAT) and computerized adaptive tests (CAT). No significant differences were found among the tests for estimated ability or posttest state anxiety in studies with 187 Spanish high school students, although other significant differences were found. Discusses implications for interpreting test…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Martinez, Michael E. – Applied Measurement in Education, 1993
Figural response (FR) items in architecture were compared with multiple-choice (MC) counterparts for their ability to predict architectural problem-solving proficiency of 33 practicing architects, 34 architecture interns, and 53 architecture students. Although both FR and MC predicted verbal design problem solving, only FR scores predicted…
Descriptors: Architects, Architectural Drafting, College Students, Comparative Analysis
Peer reviewed
Miller, Timothy R.; Hirsch, Thomas M. – Applied Measurement in Education, 1992
A procedure for interpreting multiple discrimination indices from a multidimensional item response theory analysis is described and demonstrated with responses of 1,635 high school students to a multiple-choice test. The procedure consists of converting discrimination parameter estimates to direction cosines and analyzing the angular distances…
Descriptors: Ability, Cluster Analysis, Comparative Analysis, Estimation (Mathematics)
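The conversion of discrimination estimates to direction cosines described in the abstract above has a compact linear-algebra form: normalize each item's discrimination vector to unit length and compare items by the angles between those vectors. A Python sketch with a made-up two-dimensional discrimination matrix, not the authors' data:

```python
import numpy as np

# Toy discrimination matrix: rows = items, columns = latent dimensions.
A = np.array([
    [1.2, 0.1],
    [0.9, 0.8],
    [0.1, 1.1],
])

# Direction cosines: each item's discrimination vector scaled to unit length.
cosines = A / np.linalg.norm(A, axis=1, keepdims=True)

# Pairwise angular distances (degrees) between items; small angles mean the
# items measure similar directions in the latent space and may cluster together.
angles = np.degrees(np.arccos(np.clip(cosines @ cosines.T, -1.0, 1.0)))
print(np.round(angles, 1))
```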