Publication Date

| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 220 |
| Since 2022 (last 5 years) | 1089 |
| Since 2017 (last 10 years) | 2599 |
| Since 2007 (last 20 years) | 4960 |
Audience

| Audience | Count |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
Location

| Location | Count |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
What Works Clearinghouse Rating

| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
King, Gillian; Batorowicz, Beata; Rigby, Patty; McMain-Klein, Margot; Thompson, Laura; Pinto, Madhu – International Journal of Disability, Development and Education, 2014
There is a need for psychometrically sound measures of youth experiences of community/home leisure activity settings. The 22-item Self-Reported Experiences of Activity Settings (SEAS) captures the following experiences of youth with a Grade 3 level of language comprehension or higher: Personal Growth, Psychological Engagement, Social Belonging,…
Descriptors: Foreign Countries, Adolescents, Young Adults, Reliability
Kachchaf, Rachel; Noble, Tracy; Rosebery, Ann; Wang, Yang; Warren, Beth; O'Connor, Mary Catherine – Grantee Submission, 2014
Most research on linguistic features of test items negatively impacting English language learners' (ELLs') performance has focused on lexical and syntactic features, rather than discourse features that operate at the level of the whole item. This mixed-methods study identified two discourse features in 162 multiple-choice items on a standardized…
Descriptors: English Language Learners, Science Tests, Test Items, Discourse Analysis
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items
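As background for the linear logistic test model (LLTM) studied in MacDonald (2014) above, the following is a minimal sketch of how the LLTM decomposes item difficulty into cognitive-component contributions. The Q-matrix row and component parameters are invented for illustration and are not taken from the study's simulation design.

```python
import math

def lltm_item_difficulty(q_row, etas):
    """LLTM: item difficulty is a weighted sum of cognitive-component parameters.
    q_row[k] counts how often component k is required by the item;
    etas[k] is the difficulty contribution of component k."""
    return sum(q * eta for q, eta in zip(q_row, etas))

def p_correct(theta, beta):
    """Rasch-form probability of a correct response for ability theta
    and item difficulty beta."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# Hypothetical example: two cognitive components, one item requiring both.
etas = [0.8, -0.3]   # component difficulty parameters (assumed values)
q_row = [1, 1]       # Q-matrix row for the item (assumed)
beta = lltm_item_difficulty(q_row, etas)
print(p_correct(theta=0.5, beta=beta))
```

Misspecifying the item-by-component relationships, as manipulated in the study, would correspond to perturbing entries of the Q-matrix row before computing the item difficulty.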
Bramley, Tom – Cambridge Assessment, 2014
The aim of this study was to compare models of assessment structure for achieving differentiation between examinees of different levels of attainment in the GCSE in England. GCSEs are high-stakes curriculum-based public examinations taken by 16 year olds at the end of compulsory schooling. The context for the work was an intense period of debate…
Descriptors: Foreign Countries, Exit Examinations, Alternative Assessment, High Stakes Tests
Oon, Pey-Tee; Fan, Xitao – International Journal of Science Education, 2017
Students' attitude towards science (SAS) is a frequent subject of investigation in science education research, and rating-scale surveys are commonly used to study it. The present study illustrates how Rasch analysis can be used to provide psychometric information about SAS rating scales. The analyses were conducted on a 20-item SAS scale used in an…
Descriptors: Item Response Theory, Psychometrics, Attitude Measures, Rating Scales
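As a side note on the Rasch analysis of rating scales described in Oon and Fan (2017) above, here is a minimal sketch of category probabilities under Andrich's rating scale model, a Rasch-family model commonly applied to Likert-type items. The item location and thresholds are illustrative assumptions, not estimates from the 20-item SAS scale.

```python
import math

def rsm_category_probs(theta, delta, taus):
    """Rating scale model: probabilities of response categories 0..m for a person
    at theta, item location delta, and thresholds taus shared across items."""
    exponents = [0.0]
    running = 0.0
    for tau in taus:
        running += theta - delta - tau   # cumulative sum defines each category's numerator
        exponents.append(running)
    numerators = [math.exp(e) for e in exponents]
    total = sum(numerators)
    return [n / total for n in numerators]

# Hypothetical 4-category attitude item.
print(rsm_category_probs(theta=0.2, delta=-0.1, taus=[-1.0, 0.0, 1.0]))
```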
Schoen, Robert C.; LaVenia, Mark; Champagne, Zachary M.; Farina, Kristy; Tazaz, Amanda M. – Grantee Submission, 2017
The following report describes an assessment instrument called the Mathematics Performance and Cognition (MPAC) interview. The MPAC interview was designed to measure two outcomes of interest. It was designed to measure first and second graders' mathematics achievement in number, operations, and equality, and it was also designed to gather…
Descriptors: Interviews, Test Construction, Psychometrics, Elementary School Mathematics
Schoen, Robert C.; LaVenia, Mark; Champagne, Zachary M.; Farina, Kristy – Grantee Submission, 2017
This report provides an overview of the development, implementation, and psychometric properties of a student mathematics interview designed to assess first- and second-grade student achievement and thinking processes. The student interview was conducted with 622 first- or second-grade students in 22 schools located in two public school districts…
Descriptors: Interviews, Test Construction, Psychometrics, Elementary School Mathematics
Wu, Mei – English Language Teaching, 2012
This paper compares the Public English Test System (PETS) administered in mainland China and the General English Proficiency Test (GEPT) administered in Taiwan, from the aspects of test levels, test contents and scoring weight. Compared with the PETS, the GEPT is found to value the English productive skills more, and have a greater ability to…
Descriptors: Foreign Countries, Second Language Instruction, Second Language Learning, Test Items
Sen, Rohini – ProQuest LLC, 2012
In the last five decades, research on the uses of response time has extended into the field of psychometrics (Schnipke & Scrams, 1999; van der Linden, 2006; van der Linden, 2007), where interest has centered on the usefulness of response time information in item calibration and person measurement within an item response theory framework.…
Descriptors: Structural Equation Models, Reaction Time, Item Response Theory, Computation
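Relating to the response-time literature cited in Sen (2012) above, the following is a minimal sketch of the lognormal response-time model associated with van der Linden's work, in which each examinee has a speed parameter and each item a time intensity and time discrimination. The parameter values below are illustrative assumptions only.

```python
import math

def lognormal_rt_density(t, tau, beta, alpha):
    """Lognormal response-time model: tau = person speed, beta = item time
    intensity, alpha = item time discrimination; t is the response time in seconds."""
    z = alpha * (math.log(t) - (beta - tau))
    return (alpha / (t * math.sqrt(2 * math.pi))) * math.exp(-0.5 * z * z)

# Illustrative values: a 30-second response to an item of moderate time intensity.
print(lognormal_rt_density(t=30.0, tau=0.0, beta=math.log(40.0), alpha=1.5))
```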
Jaarsveld, Saskia; Lachmann, Thomas; van Leeuwen, Cees – Intelligence, 2012
We recently proposed the Creative Reasoning Test (CRT), a test for reasoning in ill-defined problem spaces. The test asks children who first performed the Standard Progressive Matrices test (SPM) to next generate an SPM-style test item themselves. The item is scored based on different aspects of its complexity. Here we introduce a method to…
Descriptors: Creative Thinking, Creativity Tests, Problem Solving, Children
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan – Educational and Psychological Measurement, 2012
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
Descriptors: Test Items, Selection, Test Construction, Item Response Theory
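As general background for the automated test assembly problem studied in Chen, Chang, and Wu (2012) above, here is a minimal sketch that treats assembly as selecting items to meet an information target. It uses a simple greedy heuristic, not the Cell methods or mixed-integer programming compared in the article, and the item bank is invented for illustration.

```python
def assemble_form(items, target_info, length):
    """Greedy sketch: choose `length` items whose information values are closest
    to the per-item share of `target_info` (a crude stand-in for formal ATA constraints)."""
    per_item = target_info / length
    ranked = sorted(items, key=lambda it: abs(it["info"] - per_item))
    chosen = ranked[:length]
    return [it["id"] for it in chosen], sum(it["info"] for it in chosen)

# Hypothetical item bank: id and Fisher information at a reference ability point.
bank = [{"id": i, "info": round(0.1 * (i % 7) + 0.2, 2)} for i in range(40)]
print(assemble_form(bank, target_info=4.0, length=6))
```

A real assembly engine would add content, exposure, and enemy-item constraints; the point here is only the 0/1 selection framing.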
Hoffman, Lesa; Templin, Jonathan; Rice, Mabel L. – Journal of Speech, Language, and Hearing Research, 2012
Purpose: The present work describes how vocabulary ability as assessed by 3 different forms of the Peabody Picture Vocabulary Test (PPVT; Dunn & Dunn, 1997) can be placed on a common latent metric through item response theory (IRT) modeling, by which valid comparisons of ability between samples or over time can then be made. Method: Responses…
Descriptors: Item Response Theory, Test Format, Vocabulary, Comparative Analysis
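As a brief aside on placing test forms onto a common latent metric, as in Hoffman, Templin, and Rice (2012) above, the following is a minimal sketch of a mean/sigma linking transformation based on common-item difficulty estimates. The numbers are invented; the article's actual IRT modeling of the PPVT forms is more involved.

```python
import statistics

def mean_sigma_link(b_new, b_base):
    """Mean/sigma linking: find A, B so that A*b_new + B matches b_base for the
    common items; the same transformation rescales ability estimates."""
    A = statistics.pstdev(b_base) / statistics.pstdev(b_new)
    B = statistics.mean(b_base) - A * statistics.mean(b_new)
    return A, B

# Invented common-item difficulty estimates from two separate calibrations.
b_new  = [-1.2, -0.4, 0.3, 1.1]
b_base = [-0.9, -0.1, 0.6, 1.5]
A, B = mean_sigma_link(b_new, b_base)
print(A * 0.8 + B)   # an ability of 0.8 re-expressed on the base form's metric
```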
Jordan, Sally – Computers & Education, 2012
Students were observed directly, in a usability laboratory, and indirectly, by means of an extensive evaluation of responses, as they attempted interactive computer-marked assessment questions that required free-text responses of up to 20 words and as they amended their responses after receiving feedback. This provided more general insight into…
Descriptors: Learner Engagement, Feedback (Response), Evaluation, Test Interpretation
Kahraman, Nilufer; De Champlain, Andre; Raymond, Mark – Applied Measurement in Education, 2012
Item-level information, such as difficulty and discrimination, is invaluable to test assembly, equating, and scoring practices. Estimating these parameters within the context of large-scale performance assessments is often hindered by the use of unbalanced designs for assigning examinees to tasks and raters because such designs result in very…
Descriptors: Performance Based Assessment, Medicine, Factor Analysis, Test Items
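For reference on the difficulty and discrimination parameters mentioned in Kahraman, De Champlain, and Raymond (2012) above, here is a minimal sketch of the two-parameter logistic item response function. The parameter values are illustrative assumptions, not estimates from the article.

```python
import math

def p_2pl(theta, a, b):
    """2PL model: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item: moderately discriminating (a = 1.2), slightly hard (b = 0.5).
print(p_2pl(theta=0.0, a=1.2, b=0.5))
```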
Jensen, Nate; Rice, Andrew; Soland, James – Educational Evaluation and Policy Analysis, 2018
While most educators assume that not all students try their best on achievement tests, no current research examines whether behaviors associated with low test effort, such as rapid guessing on test items, affect teacher value-added estimates. In this article, we examined the prevalence of rapid guessing to determine if this behavior varied by grade,…
Descriptors: Item Response Theory, Value Added Models, Achievement Tests, Test Items
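As an illustration of the rapid-guessing behavior examined in Jensen, Rice, and Soland (2018) above, the following is a minimal sketch that flags item responses faster than a fixed response-time threshold. The threshold and data are invented; the article does not prescribe this particular rule.

```python
def flag_rapid_guesses(response_times, threshold_seconds=3.0):
    """Mark responses faster than a fixed threshold as likely rapid guesses."""
    return [t < threshold_seconds for t in response_times]

# Invented response times (seconds) for one examinee across six items.
times = [28.4, 2.1, 35.0, 1.8, 22.7, 40.3]
flags = flag_rapid_guesses(times)
print(sum(flags) / len(flags))  # proportion of items flagged as rapid guesses
```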