Publication Date
| Date range | Count |
|---|---|
| In 2026 | 0 |
| Since 2025 | 197 |
| Since 2022 (last 5 years) | 1067 |
| Since 2017 (last 10 years) | 2577 |
| Since 2007 (last 20 years) | 4938 |
Audience
| Audience group | Count |
|---|---|
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Count |
|---|---|
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
What Works Clearinghouse Rating
| Rating | Count |
|---|---|
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
van Velzen, Joke H. – International Journal of Science and Mathematics Education, 2016
Theoretically, it has been argued that a conscious understanding of metacognitive knowledge requires that this knowledge be explicit and systematic. The purpose of this descriptive study was to obtain a better understanding of explicitness and systematicity in knowledge of the mathematical problem-solving process. Eighteen 11th-grade…
Descriptors: Grade 11, High School Students, Mathematics, Mathematics Achievement
Wilhelm, Anne Garrison; Andrews-Larson, Christine – AERA Open, 2016
This study examined sources of inconsistency between teachers' and researchers' interpretations of survey items. We analyzed cognitive interview data from 12 middle school mathematics teachers to understand their interpretations of survey items focused on one aspect of their practice: the content of their advice-seeking interactions. Through this…
Descriptors: Middle School Teachers, Misconceptions, Teacher Surveys, Researchers
Reardon, Sean; Fahle, Erin; Kalogrides, Demetra; Podolsky, Anne; Zarate, Rosalia – Society for Research on Educational Effectiveness, 2016
Prior research demonstrates the existence of gender achievement gaps and the variation in the magnitude of these gaps across states. This paper characterizes the extent to which the variation of gender achievement gaps on standardized tests across the United States can be explained by differing state accountability test formats. A comprehensive…
Descriptors: Test Format, Gender Differences, Achievement Gap, Standardized Tests
Crabtree, Ashleigh R. – ProQuest LLC, 2016
The purpose of this research is to provide information about the psychometric properties of technology-enhanced (TE) items and the effects these items have on the content validity of an assessment. Specifically, this research investigated the impact that the inclusion of TE items has on the construct of a mathematics test, the technical properties…
Descriptors: Psychometrics, Computer Assisted Testing, Test Items, Test Format
Naumann, Alexander; Hochweber, Jan; Hartig, Johannes – Journal of Educational Measurement, 2014
Students' performance in assessments is commonly attributed to more or less effective teaching. This implies that students' responses are significantly affected by instruction. However, the assumption that outcome measures indeed are instructionally sensitive has scarcely been investigated empirically. In the present study, we propose a…
Descriptors: Test Bias, Longitudinal Studies, Hierarchical Linear Modeling, Test Items
Wells, Craig S.; Hambleton, Ronald K.; Kirkpatrick, Robert; Meng, Yu – Applied Measurement in Education, 2014
The purpose of the present study was to develop and evaluate two procedures for flagging consequential item parameter drift (IPD) in an operational testing program. The first procedure was based on flagging items that exhibit a meaningful magnitude of IPD using a critical value that was defined to represent barely tolerable IPD. The second procedure…
Descriptors: Test Items, Test Bias, Equated Scores, Item Response Theory
Wolkowitz, Amanda; Davis-Becker, Susan – Practical Assessment, Research & Evaluation, 2015
This study evaluates the impact of common item characteristics on the outcome of equating in credentialing examinations when traditionally recommended representation is not possible. This research used real data sets from several credentialing exams to test the impact of content representation, item statistics, and number of common items on…
Descriptors: Test Items, Equated Scores, Licensing Examinations (Professions), Test Content
Jaikaran-Doe, Seeta; Doe, Peter Edward – Australian Educational Computing, 2015
A number of validated survey instruments for assessing technological pedagogical content knowledge (TPACK) do not accurately discriminate between the seven elements of the TPACK framework, particularly technological content knowledge (TCK) and technological pedagogical knowledge (TPK). By posing simple questions that assess technological,…
Descriptors: Technological Literacy, Pedagogical Content Knowledge, Surveys, Evaluation Methods
Frick, Hannah; Strobl, Carolin; Zeileis, Achim – Educational and Psychological Measurement, 2015
Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch…
Descriptors: Item Response Theory, Test Bias, Comparative Analysis, Scores
Ranger, Jochen; Kuhn, Jörg-Tobias – Journal of Educational and Behavioral Statistics, 2015
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
Descriptors: Psychological Testing, Reaction Time, Statistical Analysis, Models
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2015
An equating procedure for a testing program with evolving distribution of examinee profiles is developed. No anchor is available because the original scoring scheme was based on expert judgment of the item difficulties. Pairs of examinees from two administrations are formed by matching on coarsened propensity scores derived from a set of…
Descriptors: Equated Scores, Testing Programs, College Entrance Examinations, Scoring
Jonick, Christine; Schneider, Jennifer; Boylan, Daniel – Accounting Education, 2017
The purpose of the research is to examine the effect of different response formats on student performance on introductory accounting exam questions. The study analyzes 1104 accounting students' responses to quantitative questions presented in two formats: multiple-choice and fill-in. Findings indicate that response format impacts student…
Descriptors: Introductory Courses, Accounting, Test Format, Multiple Choice Tests
Bendulo, Hermabeth O.; Tibus, Erlinda D.; Bande, Rhodora A.; Oyzon, Voltaire Q.; Milla, Norberto E.; Macalinao, Myrna L. – International Journal of Evaluation and Research in Education, 2017
Testing or evaluation in an educational context is primarily used to measure and authenticate the academic readiness, learning advancement, acquisition of skills, or instructional needs of learners. This study tried to determine whether the varied combinations of arrangements of options and letter cases in a Multiple-Choice Test (MCT)…
Descriptors: Test Format, Multiple Choice Tests, Test Construction, Eye Movements
Sengul Avsar, Asiye; Tavsancil, Ezel – Educational Sciences: Theory and Practice, 2017
This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three sample sizes (100, 250 and 500)--were generated by conducting 20…
Descriptors: Test Items, Psychometrics, Nonparametric Statistics, Item Response Theory
Lu, Ying – ETS Research Report Series, 2017
For standard- or criterion-based assessments, the use of cut scores to indicate mastery, nonmastery, or different levels of skill mastery is very common. As part of performance summary, it is of interest to examine the percentage of examinees at or above the cut scores (PAC) and how PAC evolves across administrations. This paper shows that…
Descriptors: Cutting Scores, Evaluation Methods, Mastery Learning, Performance Based Assessment