| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 200 |
| Since 2022 (last 5 years) | 1070 |
| Since 2017 (last 10 years) | 2580 |
| Since 2007 (last 20 years) | 4941 |
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
| Location | Records |
| --- | --- |
| Turkey | 225 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 65 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Papaieronymou, Irini – Athens Journal of Education, 2017
This paper presents the results of a study which examined the role of particular tasks implemented through two instructional methods on college students' achievement in probability. A mixed methods design that utilized a pre-test (with multiple-choice items) and post-test (with multiple-choice and open-ended items) in treatment and control groups…
Descriptors: Teaching Methods, College Students, Academic Achievement, Probability
Mullis, Ina V. S., Ed.; Martin, Michael O., Ed. – International Association for the Evaluation of Educational Achievement, 2017
The "TIMSS 2019 Assessment Frameworks" provides the foundation for the four international assessments that comprise the International Association for the Evaluation of Educational Achievement (IEA's) TIMSS (Trends in International Mathematics and Science Study) 2019: TIMSS Mathematics--Fourth Grade, TIMSS Mathematics--Eighth Grade, TIMSS…
Descriptors: Achievement Tests, Elementary Secondary Education, Foreign Countries, International Assessment
Çetinavci, Ugur Recep; Öztürk, Ismet – Online Submission, 2017
Pragmatic competence is among the explicitly acknowledged sub-competences that make up communicative competence in any language (Bachman & Palmer, 1996; Council of Europe, 2001). Within the notion of pragmatic competence itself, "implicature (implied meanings)" comes to the fore as one of the five main areas (Levinson, 1983).…
Descriptors: Test Construction, Computer Assisted Testing, Communicative Competence (Languages), Second Language Instruction
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
Dorans, Neil J. – ETS Research Report Series, 2013
Quantitative fairness procedures have been developed and modified by ETS staff over the past several decades. ETS has been a leader in fairness assessment, and its efforts are reviewed in this report. The first section deals with differential prediction and differential validity procedures that examine whether test scores predict a criterion, such…
Descriptors: Test Bias, Statistical Analysis, Test Validity, Scores
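As a brief aside on the differential prediction procedures mentioned in this entry: the core check fits the same score-to-criterion regression separately for two groups and compares intercepts and slopes. The sketch below is illustrative Python with hypothetical data and variable names, not the ETS procedures reviewed in the report.

```python
# Illustrative differential prediction check (hypothetical data, not the ETS procedure):
# regress a criterion on test scores separately for two groups and compare the fits.
import numpy as np

def fit_line(x, y):
    # np.polyfit with degree 1 returns [slope, intercept]
    slope, intercept = np.polyfit(x, y, 1)
    return intercept, slope

rng = np.random.default_rng(0)
# Hypothetical admission-test scores and a criterion (e.g., first-year GPA)
scores_a = rng.normal(500, 100, 300)
gpa_a = 1.0 + 0.004 * scores_a + rng.normal(0, 0.3, 300)
scores_b = rng.normal(480, 100, 300)
gpa_b = 0.8 + 0.004 * scores_b + rng.normal(0, 0.3, 300)

int_a, slope_a = fit_line(scores_a, gpa_a)
int_b, slope_b = fit_line(scores_b, gpa_b)

# If one prediction equation were used for everyone, a gap in intercepts (or slopes)
# means the criterion is over- or under-predicted for one of the groups.
print(f"Group A: intercept={int_a:.2f}, slope={slope_a:.4f}")
print(f"Group B: intercept={int_b:.2f}, slope={slope_b:.4f}")
```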
Vedul-Kjelsås, Vigdis; Stensdotter, Ann-Katrin; Sigmundsson, Hermundur – Scandinavian Journal of Educational Research, 2013
By using the Movement Assessment Battery (MABC), the present study investigated possible gender differences in several tasks of motor competence in children. The sample included 67 Norwegian sixth-grade children (Girls N = 29; Boys N = 39). Boys' performance exceeded that of girls in ball skills and in one of the balance skills. No differences were…
Descriptors: Foreign Countries, Gender Differences, Physical Activities, Psychomotor Skills
Parish, Jane A.; Karisch, Brandi B. – Journal of Extension, 2013
Item analysis can serve as a useful tool in improving multiple-choice questions used in Extension programming. It can identify gaps between instruction and assessment. An item analysis of Mississippi Master Cattle Producer program multiple-choice examination responses was performed to determine the difficulty of individual examinations, assess the…
Descriptors: Test Items, Item Analysis, Multiple Choice Tests, Extension Education
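For context on the item analysis referenced in this entry, the two classical statistics are item difficulty (proportion answering correctly) and item discrimination (the correlation between an item and the rest of the test). The sketch below is illustrative, using a small hypothetical 0/1 response matrix rather than the Extension program data.

```python
# Minimal classical item analysis sketch (hypothetical data, not the cited study's analysis).
# `responses` is a 0/1 matrix: rows = examinees, columns = items.
import numpy as np

responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])

difficulty = responses.mean(axis=0)   # proportion correct per item
total = responses.sum(axis=1)         # total score per examinee

# Discrimination: correlation between an item and the total score computed
# from the remaining items (corrected item-total correlation).
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (p, r) in enumerate(zip(difficulty, discrimination)):
    print(f"Item {j + 1}: difficulty p={p:.2f}, discrimination r={r:.2f}")
```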
Mao, Xiuzhen; Xin, Tao – Applied Psychological Measurement, 2013
The Monte Carlo approach, which has previously been implemented in traditional computerized adaptive testing (CAT), is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…
Descriptors: Monte Carlo Methods, Cognitive Tests, Diagnostic Tests, Computer Assisted Testing
Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying – Applied Psychological Measurement, 2013
Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
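To illustrate the fixed-length versus fixed-precision distinction in this entry: under a fixed-precision rule the test ends once the standard error of the ability estimate drops below a target, whereas a fixed-length rule stops after a set number of items. The sketch below is a simplified Rasch-based example with hypothetical values, not the algorithms compared in the article; for brevity the ability estimate is not re-estimated after each response.

```python
# Simplified fixed-precision stopping rule for CAT under a Rasch model
# (illustrative only; item pool, ability value, and thresholds are hypothetical).
import numpy as np

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

rng = np.random.default_rng(2)
pool = rng.uniform(-2, 2, 200)      # item difficulties in the pool
theta_hat = 0.0                     # current ability estimate (held fixed here for brevity)
se_target = 0.35                    # fixed-precision criterion
max_items = 40                      # fall-back fixed-length cap

administered, info = [], 0.0
while len(administered) < max_items:
    # Pick the most informative remaining item at the current ability estimate.
    remaining = [j for j in range(len(pool)) if j not in administered]
    best = max(remaining, key=lambda j: rasch_info(theta_hat, pool[j]))
    administered.append(best)
    info += rasch_info(theta_hat, pool[best])
    se = 1.0 / np.sqrt(info)
    if se <= se_target:             # stop once the precision target is reached
        break

print(f"Items administered: {len(administered)}, SE = {se:.3f}")
```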
Sun, Jianan; Xin, Tao; Zhang, Shumei; de la Torre, Jimmy – Applied Psychological Measurement, 2013
This article proposes a generalized distance discriminating method for tests with polytomous responses (GDD-P). The new method is the polytomous extension of an item response theory (IRT)-based cognitive diagnostic method, which can identify examinees' ideal response patterns (IRPs) based on a generalized distance index. The similarities between…
Descriptors: Item Response Theory, Cognitive Tests, Diagnostic Tests, Matrices
Hansen, Mary A.; Lyon, Steven R.; Heh, Peter; Zigmond, Naomi – Applied Measurement in Education, 2013
Large-scale assessment programs, including alternate assessments based on alternate achievement standards (AA-AAS), must provide evidence of technical quality and validity. This study provides information about the technical quality of one AA-AAS by evaluating the standard setting for the science component. The assessment was designed to have…
Descriptors: Alternative Assessment, Science Tests, Standard Setting, Test Validity
Svetina, Dubravka – Educational and Psychological Measurement, 2013
The purpose of this study was to investigate the effect of complex structure on dimensionality assessment in noncompensatory multidimensional item response models using dimensionality assessment procedures based on DETECT (dimensionality evaluation to enumerate contributing traits) and NOHARM (normal ogive harmonic analysis robust method). Five…
Descriptors: Item Response Theory, Statistical Analysis, Computation, Test Length
Maxwell, Lesli A. – Education Week, 2013
As test designers work to craft the new, common assessments set to debut in most of the nation's public schools in the 2014-15 school year, their goal is to provide all English-language learners (ELLs), regardless of their language-proficiency levels, the same opportunities to demonstrate their content knowledge and skills as their peers who are…
Descriptors: Testing Accommodations, English Language Learners, Educational Assessment, Consortia
Runnels, Judith – Language Testing in Asia, 2013
Differential item functioning (DIF) occurs when a test item favors or hinders test-takers who share a particular characteristic within the test-taking population. DIF analyses are statistical procedures used to determine to what extent the content of an item affects the item endorsement of sub-groups of test-takers. If DIF is found for many items on the test, the…
Descriptors: Test Items, Test Bias, Item Response Theory, College Freshmen
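As background on the DIF analyses described in this entry, one common operationalization is the Mantel-Haenszel procedure: examinees are matched on overall score, and the odds of a correct response on the studied item are compared between reference and focal groups within each score stratum. The sketch below is illustrative, uses hypothetical data, and is not necessarily the method applied in the cited study.

```python
# Mantel-Haenszel-style DIF check (one common approach; hypothetical data and groups).
import numpy as np

def mantel_haenszel_odds_ratio(item, group, total):
    """item: 0/1 scores on the studied item; group: 0 = reference, 1 = focal;
    total: matching variable (e.g., rest score on the remaining items)."""
    num, den = 0.0, 0.0
    for s in np.unique(total):
        in_stratum = total == s
        ref = in_stratum & (group == 0)
        foc = in_stratum & (group == 1)
        a = item[ref].sum()          # reference group, correct
        b = (1 - item[ref]).sum()    # reference group, incorrect
        c = item[foc].sum()          # focal group, correct
        d = (1 - item[foc]).sum()    # focal group, incorrect
        n = in_stratum.sum()
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else float("nan")

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 400)                 # hypothetical group membership
total = rng.integers(0, 11, 400)                # hypothetical matching scores
p_correct = 0.2 + 0.06 * total - 0.10 * group   # focal group slightly disadvantaged
item = (rng.random(400) < p_correct).astype(int)

# An MH odds ratio well away from 1 suggests the item functions differently
# for the two groups after matching on overall proficiency.
print(f"MH common odds ratio: {mantel_haenszel_odds_ratio(item, group, total):.2f}")
```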
Xiang, Rui – ProQuest LLC, 2013
A key issue for cognitive diagnostic models (CDMs) is the correct identification of the Q-matrix, which indicates the relationship between attributes and test items. Previous CDMs typically assumed a known Q-matrix provided by domain experts such as those who developed the questions. However, misspecifications of the Q-matrix have been discovered in the past…
Descriptors: Diagnostic Tests, Cognitive Processes, Matrices, Test Items
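To make the Q-matrix in this entry concrete: it is a binary items-by-attributes matrix whose entry (i, k) is 1 when item i requires attribute k. The sketch below uses hypothetical entries and attribute labels and shows how a DINA-type (conjunctive) model turns a Q-matrix and an attribute mastery profile into ideal item responses; it is not the estimation method proposed in the cited work.

```python
# Minimal Q-matrix sketch with DINA-style ideal responses (hypothetical entries).
import numpy as np

# Rows = items, columns = attributes; Q[i, k] = 1 if item i requires attribute k.
Q = np.array([
    [1, 0, 0],   # item 1 requires attribute A only
    [1, 1, 0],   # item 2 requires attributes A and B
    [0, 1, 1],   # item 3 requires attributes B and C
    [0, 0, 1],   # item 4 requires attribute C only
])

# An examinee's attribute mastery profile (1 = mastered): A and B, but not C.
alpha = np.array([1, 1, 0])

# Under a DINA-type (conjunctive) model, the ideal response to an item is 1
# only if every attribute the item requires has been mastered.
ideal = np.all((Q == 0) | (alpha == 1), axis=1).astype(int)
print(ideal)   # -> [1 1 0 0]
```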
