Publication Date
| Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location
| Location | Records |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Topczewski, Anna Marie – ProQuest LLC, 2013
Developmental score scales represent the performance of students along a continuum, where, as students learn more, they move higher along that continuum. Unidimensional item response theory (UIRT) vertical scaling has become a commonly used method to create developmental score scales. Research has shown that UIRT vertical scaling methods can be…
Descriptors: Item Response Theory, Scaling, Scores, Student Development
Herman, Joan; Linn, Robert – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
Two consortia, the Smarter Balanced Assessment Consortium (Smarter Balanced) and the Partnership for Assessment of Readiness for College and Careers (PARCC), are currently developing comprehensive, technology-based assessment systems to measure students' attainment of the Common Core State Standards (CCSS). The consequences of the consortia…
Descriptors: Consortia, Student Evaluation, Educational Testing, Academic Standards
Mileff, Milo – Bulgarian Comparative Education Society, 2013
In the present paper and the discussion that follows, the author presents aspects of test construction and a careful description of instructional objectives. Constructing tests involves several stages, such as describing language objectives, selecting appropriate test tasks, devising and assembling test tasks, and devising a scoring system for…
Descriptors: Behavioral Objectives, Test Construction, Norm Referenced Tests, Criterion Referenced Tests
Wiley, Colby P.; Wedeking, Travis; Galindo, Addy M. – Journal of Psychoeducational Assessment, 2013
This article reviews the Conners Early Childhood (Conners EC; Conners, 2009), a behavior and development rating scale intended to assess children in early childhood, specifically defined as ages 2 to 6 years. Using multiple informants across multiple settings, the Conners EC is administered for the purpose of early identification of disorders or…
Descriptors: Test Reviews, Rating Scales, Developmental Delays, Disability Identification
Cui, Ying; Roberts, Mary Roduta – Educational Measurement: Issues and Practice, 2013
The goal of this study was to investigate the usefulness of person-fit analysis in validating student score inferences in a cognitive diagnostic assessment. In this study, a two-stage procedure was used to evaluate person fit for a diagnostic test in the domain of statistical hypothesis testing. In the first stage, the person-fit statistic, the…
Descriptors: Scores, Validity, Cognitive Tests, Diagnostic Tests
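The abstract above truncates before naming the person-fit statistic the authors used. As a general illustration only, not necessarily the statistic in this study, one widely used index in IRT-based person-fit analysis is the standardized log-likelihood statistic l_z:

```latex
% Illustrative only: the standardized log-likelihood person-fit statistic (l_z).
% The truncated abstract above does not identify the statistic actually used.
l_0(\theta) = \sum_{i=1}^{n} \Big[ u_i \ln P_i(\theta) + (1 - u_i)\ln\big(1 - P_i(\theta)\big) \Big],
\qquad
l_z = \frac{l_0 - \mathrm{E}[l_0]}{\sqrt{\mathrm{Var}[l_0]}},
% where
\mathrm{E}[l_0] = \sum_{i=1}^{n}\Big[P_i\ln P_i + (1-P_i)\ln(1-P_i)\Big],
\qquad
\mathrm{Var}[l_0] = \sum_{i=1}^{n} P_i(1-P_i)\Big[\ln\frac{P_i}{1-P_i}\Big]^{2}.
```

Large negative values of l_z flag response patterns that are unlikely under the fitted model, which is the kind of evidence a person-fit screening stage looks for before trusting diagnostic score inferences.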
Hill, Tara M.; Laux, John M.; Stone, Gregory; Dupuy, Paula; Scott, Holly – Journal of Addictions & Offender Counseling, 2013
Rasch analysis of the Substance Abuse Subtle Screening Inventory-3 (SASSI-3; F. G. Miller & Lazowski, 1999) indicated that the SASSI-3 meets fundamental measurement properties; however, the authors of the current study recommend the elimination of nonfunctioning items and the improvement of response options for the face valid scales to…
Descriptors: Test Items, Substance Abuse, Usability, Test Validity
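For context on the "fundamental measurement properties" referenced above, the dichotomous Rasch model, stated here for general reference and not as a reproduction of the article's analysis, models the probability of endorsing item i as a function of person ability and item difficulty only:

```latex
% Dichotomous Rasch model, shown for reference; parameter names are generic
% and not taken from the SASSI-3 analysis itself.
P(X_i = 1 \mid \theta) = \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}
```

Because all items share a common slope, the model supports the invariant person and item comparisons that "fundamental measurement properties" typically refers to.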
Obinne, A.D.E. – World Journal of Education, 2012
The 3-parameter model of Item Response Theory gives the probability of an individual (examinee) responding correctly to an item without being sure of all the facts; this is known as guessing. Guessing can be a strategy employed by examinees to earn more marks, and the way an item is constructed can expose the item to guessing by the examinee. A…
Descriptors: Item Response Theory, Test Items, Guessing (Tests), Probability
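As a point of reference for the guessing parameter discussed above (the study's exact model specification is not reproduced in the snippet), the three-parameter logistic (3PL) model adds a lower asymptote c_i to the item response function:

```latex
% Standard 3PL item response function; a_i = discrimination, b_i = difficulty,
% c_i = pseudo-guessing (lower asymptote), \theta = examinee ability.
% Shown for reference, not as the cited study's exact parameterization.
P(X_i = 1 \mid \theta) = c_i + (1 - c_i)\,
    \frac{\exp\big(a_i(\theta - b_i)\big)}{1 + \exp\big(a_i(\theta - b_i)\big)}
```

Even an examinee of very low ability answers correctly with probability at least c_i, which is how the model captures guessing.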
Li, Yanmei – ETS Research Report Series, 2012
In a common-item (anchor) equating design, the common items should be evaluated for item parameter drift. Drifted items are often removed. For a test that contains mostly dichotomous items and only a small number of polytomous items, removing some drifted polytomous anchor items may result in anchor sets that no longer resemble mini-versions of…
Descriptors: Scores, Item Response Theory, Equated Scores, Simulation
Gewertz, Catherine – Education Week, 2012
Pondering a math problem while she swings her sneakered feet from a chair, 12-year-old Andrea Guevara is helping researchers design an assessment that will shape the learning of 19 million students. The 8th grader, who came to the United States from Ecuador three years ago, is trying out two ways of providing English-language support on a…
Descriptors: Test Items, Foreign Countries, Feedback (Response), Protocol Analysis
Buckendahl, Chad W.; Davis-Becker, Susan L. – Practical Assessment, Research & Evaluation, 2012
The consequences associated with the uses and interpretations of scores for many credentialing testing programs have important implications for a range of stakeholders. Within licensure settings specifically, results from examination programs are often one of the final steps in the process of assessing whether individuals will be allowed to enter…
Descriptors: Licensing Examinations (Professions), Test Items, Dentistry, Minimum Competency Testing
Li, Feiming; Cohen, Allan; Shen, Linjun – Journal of Educational Measurement, 2012
Computer-based tests (CBTs) often use random ordering of items in order to minimize item exposure and reduce the potential for answer copying. Little research has been done, however, to examine item position effects for these tests. In this study, different versions of a Rasch model and different response time models were examined and applied to…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Models
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
Frank, Jerrold – English Teaching Forum, 2012
This piece makes a case for using assessment to understand and identify the needs of learners and introduces the three reprints that follow: "Twenty Common Testing Mistakes for EFL Teachers to Avoid," "Coming to Grips with Progress Testing: Some Guidelines for Its Design," and "Purposeful Language Assessment: Selecting the Right Alternative Test."
Descriptors: Testing, English (Second Language), Evaluation, Language Teachers
De Cock, Mieke – Physical Review Special Topics - Physics Education Research, 2012
In this paper, we examine student success on three variants of a test item given in different representational formats (verbal, pictorial, and graphical), with an isomorphic problem statement. We confirm results from recent papers where it is mentioned that physics students' problem-solving competence can vary with representational format and that…
Descriptors: Physics, Problem Solving, Science Tests, Test Items
Diezmann, Carmel M.; Lowrie, Tom – International Journal of Science and Mathematics Education, 2012
Learning to think spatially in mathematics involves developing proficiency with graphics. This paper reports on 2 investigations of spatial thinking and graphics. The first investigation explored the importance of graphics as 1 of 3 communication systems (i.e. text, symbols, graphics) used to provide information in numeracy test items. The results…
Descriptors: Memory, Spatial Ability, Test Items, Numeracy
