What Works Clearinghouse Rating
Showing 1 to 15 of 94 results
Peer reviewed
Buzick, Heather M.; Casabianca, Jodi M.; Gholson, Melissa L. – Educational Measurement: Issues and Practice, 2023
The article describes practical suggestions for measurement researchers and psychometricians to respond to calls for social responsibility in assessment. The underlying assumption is that personalizing large-scale assessment improves the chances that assessment and the use of test scores will contribute to equity in education. This article…
Descriptors: Achievement Tests, Individualized Instruction, Evaluation Methods, Equal Education
Peer reviewed
Jung Yeon Park; Sean Joo; Zikun Li; Hyejin Yoon – Educational Measurement: Issues and Practice, 2025
This study examines potential assessment bias based on students' primary language status in PISA 2018. Specifically, multilingual (MLs) and nonmultilingual (non-MLs) students in the United States are compared with regard to their response time as well as scored responses across three cognitive domains (reading, mathematics, and science).…
Descriptors: Achievement Tests, Secondary School Students, International Assessment, Test Bias
Peer reviewed
Xiangyi Liao; Daniel M. Bolt – Educational Measurement: Issues and Practice, 2024
Traditional approaches to the modeling of multiple-choice item response data (e.g., 3PL, 4PL models) emphasize slips and guesses as random events. In this paper, an item response model is presented that characterizes both disjunctively interacting guessing and conjunctively interacting slipping processes as proficiency-related phenomena. We show…
Descriptors: Item Response Theory, Test Items, Error Correction, Guessing (Tests)
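As a point of reference for the 3PL model this abstract contrasts with, the standard three-parameter logistic item response function can be sketched as follows (parameter values are illustrative, not from the article):

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model.

    theta: examinee proficiency
    a: item discrimination
    b: item difficulty
    c: lower asymptote, interpreted as random guessing
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Under the 3PL, even a very low-proficiency examinee answers
# correctly with probability near the guessing floor c.
print(round(p_correct_3pl(theta=-3.0, a=1.2, b=0.0, c=0.2), 3))  # 0.221
```

The article's point of departure is that this treats guessing (and, in the 4PL, slipping) as proficiency-independent random events, whereas the proposed model ties both processes to proficiency.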
Peer reviewed
Dihao Leng; Ummugul Bezirhan; Lale Khorramdel; Bethany Fishbein; Matthias von Davier – Educational Measurement: Issues and Practice, 2024
This study capitalizes on response and process data from the computer-based TIMSS 2019 Problem Solving and Inquiry tasks to investigate gender differences in test-taking behaviors and their association with mathematics achievement at the eighth grade. Specifically, a recently proposed hierarchical speed-accuracy-revisits (SAR) model was adapted to…
Descriptors: Gender Differences, Test Wiseness, Achievement Tests, Mathematics Tests
Peer reviewed
Ulitzsch, Esther; Domingue, Benjamin W.; Kapoor, Radhika; Kanopka, Klint; Rios, Joseph A. – Educational Measurement: Issues and Practice, 2023
Common response-time-based approaches for non-effortful response behavior (NRB) in educational achievement tests filter responses that are associated with response times below some threshold. These approaches are, however, limited in that they require a binary decision on whether a response is classified as stemming from NRB; thus ignoring…
Descriptors: Reaction Time, Responses, Behavior, Achievement Tests
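The binary threshold-filtering approach the abstract critiques can be sketched minimally as follows (the threshold value and response data are hypothetical):

```python
def flag_nrb(responses, threshold=3.0):
    """Binary classification of non-effortful response behavior (NRB):
    any response with a response time (in seconds) below the threshold
    is flagged as non-effortful; the rest are kept for scoring.

    responses: list of (item_id, scored_answer, response_time) tuples
    """
    kept, flagged = [], []
    for item, answer, rt in responses:
        (flagged if rt < threshold else kept).append((item, answer, rt))
    return kept, flagged

data = [("q1", 1, 12.4), ("q2", 0, 1.1), ("q3", 1, 8.7)]
kept, flagged = flag_nrb(data)
print(len(kept), len(flagged))  # 2 1
```

The limitation the article identifies is visible here: every response is forced into an all-or-nothing NRB decision at the cutoff, with no way to express uncertainty about borderline response times.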
Peer reviewed
Ulitzsch, Esther; Lüdtke, Oliver; Robitzsch, Alexander – Educational Measurement: Issues and Practice, 2023
Country differences in response styles (RS) may jeopardize cross-country comparability of Likert-type scales. When adjusting for rather than investigating RS is the primary goal, it seems advantageous to impose minimal assumptions on RS structures and leverage information from multiple scales for RS measurement. Using PISA 2015 background…
Descriptors: Response Style (Tests), Comparative Analysis, Achievement Tests, Foreign Countries
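One simple descriptive index of extreme response style on a Likert scale is the proportion of responses in the endpoint categories; the article's approach is model-based with minimal structural assumptions, so this sketch is only an illustration of the construct:

```python
def extreme_response_proportion(ratings, scale_min=1, scale_max=5):
    """Share of Likert responses falling in the endpoint categories,
    a simple descriptive index of extreme response style (ERS)."""
    extremes = sum(1 for r in ratings if r in (scale_min, scale_max))
    return extremes / len(ratings)

# A respondent who often picks 1 or 5 gets a high ERS index.
print(extreme_response_proportion([1, 5, 3, 5, 2, 1, 4, 5]))  # 0.625
```

Cross-country differences in such indices are exactly what can distort Likert-scale comparisons if left unadjusted.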
Peer reviewed
Ji, Xuejun Ryan; Wu, Amery D. – Educational Measurement: Issues and Practice, 2023
Measurement specialists have shown the Cross-Classified Mixed Effects Model (CCMEM) to be a flexible framework for evaluating reliability, which can be estimated from the variance components of the test scores. Building on that work, this study extends the CCMEM to the evaluation of validity evidence.…
Descriptors: Measurement, Validity, Reliability, Models
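Estimating reliability from variance components can be sketched in generalizability-theory style, with persons and items as crossed facets; this is a simplified illustration, not necessarily the exact CCMEM estimator used in the article:

```python
def reliability_from_variance_components(var_person, var_item, var_residual):
    """Reliability as the proportion of observed-score variance
    attributable to true differences between persons, given variance
    components from a crossed persons-by-items decomposition.

    Note: whether var_item belongs in the denominator depends on
    whether absolute or relative score interpretations are intended;
    this sketch uses the absolute-decision form.
    """
    return var_person / (var_person + var_item + var_residual)

# Hypothetical variance components: persons 0.6, items 0.1, residual 0.3.
print(reliability_from_variance_components(0.6, 0.1, 0.3))  # 0.6
```

The same decomposition logic is what the mixed-effects framing makes flexible: additional crossed facets (raters, occasions) simply add variance components to the denominator.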
Peer reviewed
Pepper, David – Educational Measurement: Issues and Practice, 2020
The Standards for Educational and Psychological Testing identify several strands of validity evidence that may be needed as support for particular interpretations and uses of assessments. Yet assessment validation often does not seem guided by these Standards, with validations lacking a particular strand even when it appears relevant to an…
Descriptors: Validity, Foreign Countries, Achievement Tests, International Assessment
Peer reviewed
König, Christoph; Khorramdel, Lale; Yamamoto, Kentaro; Frey, Andreas – Educational Measurement: Issues and Practice, 2021
Large-scale assessments such as the Programme for International Student Assessment (PISA) have field trials where new survey features are tested for utility in the main survey. Because of resource constraints, there is a trade-off between how much of the sample can be used to test new survey features and how much can be used for the initial item…
Descriptors: Achievement Tests, Foreign Countries, Secondary School Students, International Assessment
Peer reviewed
Joo, Seang-Hwane; Khorramdel, Lale; Yamamoto, Kentaro; Shin, Hyo Jeong; Robin, Frederic – Educational Measurement: Issues and Practice, 2021
In Programme for International Student Assessment (PISA), item response theory (IRT) scaling is used to examine the psychometric properties of items and scales and to provide comparable test scores across participating countries and over time. To balance the comparability of IRT item parameter estimations across countries with the best possible…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students
Peer reviewed
Rios, Joseph A.; Ihlenfeldt, Samuel D. – Educational Measurement: Issues and Practice, 2021
This study sought to investigate how states communicate results for academic achievement and English language proficiency (ELP) assessments to parents who are English learners (EL). This objective was addressed by evaluating: (a) whether score reports and interpretive guides for state academic achievement and ELP assessments in each state were…
Descriptors: Parents, English Language Learners, Communication (Thought Transfer), Scores
Peer reviewed
Traynor, A.; Merzdorf, H. E. – Educational Measurement: Issues and Practice, 2018
During the development of large-scale curricular achievement tests, recruited panels of independent subject-matter experts use systematic judgmental methods--often collectively labeled "alignment" methods--to rate the correspondence between a given test's items and the objective statements in a particular curricular standards document.…
Descriptors: Achievement Tests, Expertise, Alignment (Education), Test Items
Peer reviewed
van de Vijver, Fons J. R. – Educational Measurement: Issues and Practice, 2018
A conceptual framework of measurement bias in cross-cultural comparisons, distinguishing between construct, method, and item bias (differential item functioning), is used to describe a methodological framework addressing assessment of noncognitive variables in international large-scale studies. It is argued that the treatment of bias, coming from…
Descriptors: Educational Assessment, Achievement Tests, Foreign Countries, International Assessment
Peer reviewed
Kroehne, Ulf; Buerger, Sarah; Hahnel, Carolin; Goldhammer, Frank – Educational Measurement: Issues and Practice, 2019
For many years, reading comprehension in the Programme for International Student Assessment (PISA) was measured via paper-based assessment (PBA). In the 2015 cycle, computer-based assessment (CBA) was introduced, raising the question of whether central equivalence criteria required for a valid interpretation of the results are fulfilled. As an…
Descriptors: Reading Comprehension, Computer Assisted Testing, Achievement Tests, Foreign Countries
Peer reviewed
Welch, Catherine J.; Dunbar, Stephen B. – Educational Measurement: Issues and Practice, 2020
The use of assessment results to inform school accountability relies on the assumption that the test design appropriately represents the content and cognitive emphasis reflected in the state's standards. Since the passage of the Every Student Succeeds Act and the certification of accountability assessments through federal peer review practices,…
Descriptors: Accountability, Test Construction, State Standards, Content Validity