Showing 1 to 15 of 21 results
Peer reviewed
Yi-Hsin Chen – Applied Measurement in Education, 2024
This study aims to apply the differential item functioning (DIF) technique with the deterministic inputs, noisy "and" gate (DINA) model to validate the mathematics construct and diagnostic attribute profiles across American and Singaporean students. Even with the same ability level, every single item is expected to show uniform DIF…
Descriptors: Foreign Countries, Achievement Tests, Elementary Secondary Education, International Assessment
Peer reviewed
Pools, Elodie – Applied Measurement in Education, 2022
Many low-stakes assessments, such as international large-scale surveys, are administered during time-limited testing sessions, and some test-takers are unable to respond to the last items of the test, resulting in not-reached (NR) items. However, because the test has no consequence for the respondents, these NR items can also stem from quitting the…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
Peer reviewed
Lee, HyeSun; Smith, Weldon; Martinez, Angel; Ferris, Heather; Bova, Joe – Applied Measurement in Education, 2021
The aim of the current research was to provide recommendations to facilitate the development and use of anchoring vignettes (AVs) for cross-cultural comparisons in education. Study 1 identified six factors leading to order violations and ties in AV responses based on cognitive interviews with 15-year-old students. The factors were categorized into…
Descriptors: Vignettes, Test Items, Equated Scores, Nonparametric Statistics
Peer reviewed
Rios, Joseph A.; Guo, Hongwen – Applied Measurement in Education, 2020
The objective of this study was to evaluate whether differential noneffortful responding (identified via response latencies) was present in four countries administered a low-stakes college-level critical thinking assessment. Results indicated significant differences (as large as 0.90 SD) between nearly all country pairings in the…
Descriptors: Response Style (Tests), Cultural Differences, Critical Thinking, Cognitive Tests
Peer reviewed
Soland, James; Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2019
Disengaged responding is a phenomenon that often biases observed scores from achievement tests and surveys in practically and statistically significant ways. This problem has led to the development of methods to detect and correct for disengaged responses on both achievement test and survey scores. One major disadvantage when trying to detect…
Descriptors: Reaction Time, Metadata, Response Style (Tests), Student Surveys
Peer reviewed
El Masri, Yasmine H.; Andrich, David – Applied Measurement in Education, 2020
In large-scale educational assessments, it is generally required that tests are composed of items that function invariantly across the groups to be compared. Despite efforts to ensure invariance in the item construction phase, for a range of reasons (including the security of items) it is often necessary to account for differential item…
Descriptors: Models, Goodness of Fit, Test Validity, Achievement Tests
Peer reviewed
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
Peer reviewed
Sachse, Karoline A.; Haag, Nicole – Applied Measurement in Education, 2017
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
Descriptors: Error of Measurement, Test Bias, International Assessment, Computation
Peer reviewed
Rutkowski, Leslie; Svetina, Dubravka – Applied Measurement in Education, 2017
In spite of the challenges inherent in making dozens of comparisons across heterogeneous populations, a relatively recent interest in scale-score equivalence for non-achievement measures in an international context has emerged. Until recently, operational procedures for establishing measurement invariance using multiple-groups analyses were…
Descriptors: International Assessment, Goodness of Fit, Statistical Analysis, Teacher Surveys
Peer reviewed
Oliveri, Maria Elena; Ercikan, Kadriye; Lyons-Thomas, Juliette; Holtzman, Steven – Applied Measurement in Education, 2016
Differential item functioning (DIF) analyses have been used as the primary method in large-scale assessments to examine fairness for subgroups. Currently, DIF analyses are conducted using manifest methods that rely on observed characteristics (gender and race/ethnicity) for grouping examinees. Homogeneity of item responses is assumed, denoting that…
Descriptors: Test Bias, Language Minorities, Effect Size, Foreign Countries
Peer reviewed
Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2017
There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…
Descriptors: Test Bias, Computer Assisted Testing, Item Response Theory, Achievement Tests
Peer reviewed
Lee, HyeSun – Applied Measurement in Education, 2018
The current simulation study examined the effects of Item Parameter Drift (IPD) occurring in a short scale on parameter estimates in multilevel models where scores from a scale were employed as a time-varying predictor to account for outcome scores. Five factors, including three decisions about IPD, were considered for simulation conditions. It…
Descriptors: Test Items, Hierarchical Linear Modeling, Predictor Variables, Scores
Peer reviewed
Grover, Raman K.; Ercikan, Kadriye – Applied Measurement in Education, 2017
In gender differential item functioning (DIF) research, it is assumed that all members of a gender group have similar item response patterns and that generalizations from the group level to subgroup and individual levels can therefore be made accurately. However, DIF items do not necessarily disadvantage every member of a gender group to the same degree,…
Descriptors: Gender Differences, Test Bias, Socioeconomic Status, Reading Achievement
Peer reviewed
Care, Esther; Scoular, Claire; Griffin, Patrick – Applied Measurement in Education, 2016
The Assessment and Teaching of 21st Century Skills (ATC21S™) project undertook a research and development plan that included conceptualization of 21st century skills and assessment task development. Conceptualization focused on the definition of 21st century skills. This article outlines the particular case of one of these skills, collaborative…
Descriptors: Problem Solving, Critical Thinking, Achievement Tests, International Assessment
Peer reviewed
Benítez, Isabel; Padilla, José-Luis; Hidalgo Montesinos, María Dolores; Sireci, Stephen G. – Applied Measurement in Education, 2016
Analysis of differential item functioning (DIF) is often used to determine whether cross-lingual assessments are equivalent across languages. However, evidence on the causes of cross-lingual DIF remains elusive. Expert appraisal is a qualitative method useful for obtaining detailed information about problematic elements in the different linguistic…
Descriptors: Test Bias, Mixed Methods Research, Questionnaires, International Assessment