Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 8
Source
Applied Measurement in Education: 1
Cogent Education: 1
Educational Assessment: 1
Journal of Chemical Education: 1
Journal of Positive Behavior Interventions: 1
Journal of Research on Technology in Education: 1
Large-scale Assessments in Education: 1
Online Submission: 1
Author
Abdullah, Saifuddin Kumar: 1
Abedi, Jamal: 1
Barron, Ann E.: 1
Bernholt, Sascha: 1
Bolt, Daniel M.: 1
Hadenfeldt, Jan C.: 1
Hamzah, Mohd Sahandri Gani: 1
Hansen, Mary A.: 1
Heh, Peter: 1
Hohlfeld, Tina N.: 1
Kim, Jerin: 1
Publication Type
Journal Articles: 8
Reports - Research: 6
Reports - Evaluative: 2
Education Level
Elementary Secondary Education: 8
Grade 8: 4
Elementary Education: 3
Middle Schools: 3
Secondary Education: 3
Grade 4: 2
Grade 10: 1
Grade 11: 1
Grade 12: 1
Grade 6: 1
Grade 7: 1
Location
California: 1
Florida: 1
Germany: 1
Malaysia: 1
Laws, Policies, & Programs
Assessments and Surveys
Trends in International Mathematics and Science Study (TIMSS): 1
Kim, Jerin; McIntosh, Kent – Journal of Positive Behavior Interventions, 2025
We aimed to identify empirically valid cut scores on the positive behavioral interventions and supports (PBIS) Tiered Fidelity Inventory (TFI) through an expert panel process known as bookmarking. The TFI is a measurement tool to evaluate the fidelity of implementation of PBIS. In the bookmark method, experts reviewed all TFI items and item scores…
Descriptors: Positive Behavior Supports, Cutting Scores, Fidelity, Program Evaluation
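As a rough, self-contained illustration of the bookmark logic this abstract describes (not the authors' actual TFI items, panel ratings, or procedure), the sketch below orders item score points by difficulty and turns panelists' bookmark placements into a provisional cut score via the median placement; every value is invented.

```python
# Illustrative bookmark-style cut score derivation (hypothetical data only).
from statistics import median

# Item score points ordered from easiest to hardest (difficulty values are made up).
ordered_items = [
    ("item_01", 0.12), ("item_02", 0.25), ("item_03", 0.33),
    ("item_04", 0.41), ("item_05", 0.52), ("item_06", 0.60),
    ("item_07", 0.71), ("item_08", 0.83),
]

# Each panelist places a bookmark after the last item a minimally proficient
# implementer would be expected to master (1-based position in the ordered booklet).
bookmark_placements = [4, 5, 4, 6, 5]

def cut_score_from_bookmarks(items, placements):
    """Map the median bookmark position to the difficulty of the bookmarked item."""
    pos = round(median(placements))    # consensus placement across panelists
    name, difficulty = items[pos - 1]  # item sitting at the bookmark
    return name, difficulty

item, cut = cut_score_from_bookmarks(ordered_items, bookmark_placements)
print(f"Median bookmark falls on {item}; provisional cut score = {cut:.2f}")
```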
Huang, Qi; Bolt, Daniel M.; Lyu, Weicong – Large-scale Assessments in Education, 2024
Large-scale international assessments depend on measurement invariance across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
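To make concrete how cross-national DIF shows up under an IRT model, here is a minimal sketch assuming a 2PL item response function with group-specific, invented item parameters; it illustrates the general idea only and is not the article's analysis.

```python
# Minimal 2PL illustration of cross-national DIF (all parameters are invented).
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Group-specific parameters for the same item (discrimination a, difficulty b).
params = {"country_A": (1.2, 0.0), "country_B": (1.2, 0.4)}

theta = 0.5  # same latent ability in both groups
for group, (a, b) in params.items():
    print(f"{group}: P(correct | theta={theta}) = {p_correct(theta, a, b):.3f}")
# A gap at equal theta is the signature of DIF; whether it reflects real bias or
# a misspecified model is the question the article raises.
```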
Hansen, Mary A.; Lyon, Steven R.; Heh, Peter; Zigmond, Naomi – Applied Measurement in Education, 2013
Large-scale assessment programs, including alternate assessments based on alternate achievement standards (AA-AAS), must provide evidence of technical quality and validity. This study provides information about the technical quality of one AA-AAS by evaluating the standard setting for the science component. The assessment was designed to have…
Descriptors: Alternative Assessment, Science Tests, Standard Setting, Test Validity
König, Johannes – Cogent Education, 2015
The study aims to develop and explore a novel video-based assessment that captures teachers' classroom management expertise (CME) and for which statistical results are provided. CME measurement is conceptualized using four video clips that depict typical classroom management situations in which teachers are heavily challenged…
Descriptors: Classroom Techniques, Expertise, Video Technology, Teacher Evaluation
Hadenfeldt, Jan C.; Bernholt, Sascha; Liu, Xiufeng; Neumann, Knut; Parchmann, Ilka – Journal of Chemical Education, 2013
Helping students develop a sound understanding of scientific concepts can be a major challenge. Lately, learning progressions have received increasing attention as a means to support students in developing understanding of core scientific concepts. At the center of a learning progression is a sequence of developmental levels reflecting an…
Descriptors: Elementary School Science, Secondary School Science, Science Instruction, Chemistry
Hamzah, Mohd Sahandri Gani; Abdullah, Saifuddin Kumar – Online Submission, 2011
The evaluation of learning is a systematic process involving testing, measurement, and evaluation. In the testing step, a teacher needs to choose the instrument best suited to assessing students' learning. Testing produces scores or marks with many variations, in either homogeneous or heterogeneous forms, that are then used to categorize the scores…
Descriptors: Test Items, Item Analysis, Difficulty Level, Testing
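As a small worked example of the item analysis and difficulty indices this abstract alludes to, the sketch below computes the classical proportion-correct difficulty and a simple mean-difference discrimination index from an invented 0/1 score matrix; the data are hypothetical and not drawn from the paper.

```python
# Classical item analysis on a hypothetical 0/1 score matrix (rows = students, columns = items).
scores = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
]

n_students = len(scores)
n_items = len(scores[0])
totals = [sum(row) for row in scores]  # each student's total score

for j in range(n_items):
    item_col = [row[j] for row in scores]
    difficulty = sum(item_col) / n_students  # proportion correct (p-value)
    # Simple discrimination check: mean total score of students who answered the
    # item correctly minus the mean total score of those who answered incorrectly.
    right = [totals[i] for i in range(n_students) if item_col[i] == 1]
    wrong = [totals[i] for i in range(n_students) if item_col[i] == 0]
    disc = (sum(right) / len(right) if right else 0) - (sum(wrong) / len(wrong) if wrong else 0)
    print(f"item {j + 1}: difficulty = {difficulty:.2f}, discrimination (mean diff) = {disc:.2f}")
```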
Hohlfeld, Tina N.; Ritzhaupt, Albert D.; Barron, Ann E. – Journal of Research on Technology in Education, 2010
This article provides an overview of the development and validation of the Student Tool for Technology Literacy (ST²L). Developing valid and reliable objective performance measures for monitoring technology literacy is important to all organizations charged with equipping students with the technology skills needed to successfully…
Descriptors: Test Validity, Ability Grouping, Grade 8, Test Construction
Abedi, Jamal – Educational Assessment, 2009
This study compared performance of both English language learners (ELLs) and non-ELL students in Grades 4 and 8 under accommodated and nonaccommodated testing conditions. The accommodations used in this study included a computerized administration of a math test with a pop-up glossary, a customized English dictionary, extra testing time, and…
Descriptors: Computer Assisted Testing, Testing Accommodations, Mathematics Tests, Grade 4