Showing 271 to 285 of 3,709 results
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Peer reviewed
Direct link
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Assessment in Education: Principles, Policy & Practice, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
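The two records above do not specify which hand-calculated WE-CBM metrics were compared with automated scoring. Purely as background, a minimal sketch of two metrics that are commonly hand-scored in WE-CBM, total words written (TWW) and words spelled correctly (WSC), might look like the following; the tiny DICTIONARY set is a placeholder, not a real spelling lexicon.

```python
# Illustrative sketch of two hand-calculated WE-CBM metrics: total words
# written (TWW) and words spelled correctly (WSC). The DICTIONARY set is a
# placeholder; a real scorer would use a full word list or spell checker.
import re

DICTIONARY = {"the", "dog", "ran", "fast", "to", "house", "big"}  # placeholder

def score_sample(text: str) -> dict:
    """Return TWW and WSC for one writing sample."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    tww = len(words)                                  # total words written
    wsc = sum(1 for w in words if w in DICTIONARY)    # words spelled correctly
    return {"TWW": tww, "WSC": wsc}

if __name__ == "__main__":
    print(score_sample("The dog ran fastt to the big house"))
    # {'TWW': 8, 'WSC': 7}
```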
Ann Mantil; John Papay; Preeya Pandya Mbekeani; Richard J. Murnane – Annenberg Institute for School Reform at Brown University, 2022
Preparing K-12 students for careers in science, technology, engineering and mathematics (STEM) fields is an ongoing challenge confronting state policymakers. We examine the implementation of a science graduation testing requirement for high-school students in Massachusetts, beginning with the graduating class of 2010. We find that the design of…
Descriptors: High School Students, STEM Education, STEM Careers, Student Interests
Peer reviewed
Direct link
Parekh, Gillian; Brown, Robert S.; Zheng, Samuel – Educational Policy, 2021
The reporting of students' Learning Skills on the Ontario provincial report card provides educators and families with insight into students' work habits. However, the evaluation process is highly subjective. This study explores teachers' perceptions around student learning across demographic and institutional factors. This exploratory study is the…
Descriptors: Foreign Countries, Study Habits, Student Evaluation, Student Characteristics
Peer reviewed
Direct link
Muhammad, Gholnecsar E.; Ortiz, Nickolaus A.; Neville, Mary L. – Reading Teacher, 2021
According to data from the National Assessment of Educational Progress, or NAEP, the U.S. educational system has consistently failed Black and Brown children across both reading and mathematics. Educational research has further uncovered the ways that reading and mathematics assessment and curriculum are often biased and culturally and…
Descriptors: National Competency Tests, Reading Achievement, Mathematics Achievement, African American Students
Peer reviewed
Direct link
Ascenzi-Moreno, Laura; Seltzer, Kate – Journal of Literacy Research, 2021
Recent scholarship has identified how the reading assessment process can be improved by adapting to and accounting for emergent bilinguals' multilingual resources. While this work provides guidance about how teachers can take this approach within their assessment practices, this article strengthens and builds on this scholarship by combining…
Descriptors: Ideology, Student Evaluation, Reading Tests, Bilingual Students
Peer reviewed
PDF on ERIC
Balbuena, Sherwin E.; Maligalig, Dalisay S.; Quimbo, Maria Ana T. – Online Submission, 2021
The University Student Depression Inventory (USDI; Khawaja and Bryden 2006) is a 30-item scale used to measure depressive symptoms among university students. Its psychometric properties have been widely investigated under classical test theory (CTT). This study explored the application of the polytomous Rasch partial credit model…
Descriptors: Item Response Theory, Likert Scales, College Students, Depression (Psychology)
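The abstract above does not reproduce the model equations. As an illustration only, the category probabilities of a partial credit model for a single polytomous item can be sketched as below; `theta` is a person parameter, `deltas` are made-up step difficulties for a hypothetical 5-point Likert item.

```python
# Sketch of partial credit model (PCM) category probabilities for one item:
# P(X = k | theta) is proportional to exp(sum_{j<=k}(theta - delta_j)),
# with the empty sum for k = 0 defined as 0.
import numpy as np

def pcm_probabilities(theta: float, deltas: np.ndarray) -> np.ndarray:
    """Category probabilities 0..m for an item with step difficulties `deltas`."""
    # Cumulative sums of (theta - delta_j); prepend 0 for the bottom category.
    psi = np.concatenate(([0.0], np.cumsum(theta - deltas)))
    expo = np.exp(psi - psi.max())        # stabilize before normalizing
    return expo / expo.sum()

# Hypothetical item with 4 step difficulties, person located at theta = 0.5
print(pcm_probabilities(0.5, np.array([-1.0, -0.2, 0.4, 1.3])))
```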
Areekkuzhiyil, Santhosh – Online Submission, 2021
Assessment is an integral part of any teaching-learning process. Assessment has a large number of functions to perform, whether it is formative or summative. This paper analyses the issues involved and the areas of concern in classroom assessment practice and discusses the recent reforms that have taken place. [This paper was published in Edutracks v20 n8…
Descriptors: Student Evaluation, Formative Evaluation, Summative Evaluation, Test Validity
Peer reviewed
Direct link
Sanguras, Laila Y.; Gibson, Shavonne D.; Haqqi, Hamza S.; Torres, Angie M. – AERA Online Paper Repository, 2021
Minority students are underrepresented in gifted and talented education programs across the nation, and the methods used to identify students for advanced services may be the issue. This study examined the Scales for Identifying Gifted Students (SIGS), a set of nationally normed behavior rating scales, for the purpose of updating the instrument. The…
Descriptors: Gifted, Academically Gifted, Talent Identification, Measures (Individuals)
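The record above notes that the SIGS is nationally normed but does not describe its scoring. For orientation only, norm-referenced rating scales of this kind typically convert a raw rating total to a standard score against the norm group; the sketch below uses the common T-score metric (mean 50, SD 10) with invented norm values that are not SIGS norms.

```python
# Generic norm-referenced conversion of a raw rating-scale total to a T-score
# (mean 50, SD 10). The norm mean and SD here are invented for illustration.
def to_t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    z = (raw - norm_mean) / norm_sd   # standardize against the norm group
    return 50.0 + 10.0 * z            # rescale to the T metric

print(to_t_score(raw=62, norm_mean=50.0, norm_sd=8.0))  # 65.0
```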
Wenjing Guo – ProQuest LLC, 2021
Constructed response (CR) items are widely used in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district and state-level assessments in the United States. One unique feature of CR items is that they depend on human raters to assess the quality of examinees' work. The judgment of human…
Descriptors: National Competency Tests, Responses, Interrater Reliability, Error of Measurement
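The abstract above concerns the dependence of constructed response items on human raters, and its descriptors include interrater reliability. One standard agreement index for two raters is Cohen's kappa; a minimal sketch, with invented rubric scores, is shown below.

```python
# Minimal Cohen's kappa for two raters scoring the same constructed responses:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
import numpy as np

def cohens_kappa(rater_a: np.ndarray, rater_b: np.ndarray) -> float:
    categories = np.union1d(rater_a, rater_b)
    p_o = np.mean(rater_a == rater_b)                     # observed agreement
    p_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 0-3 rubric scores from two raters on ten essays
a = np.array([0, 1, 2, 2, 3, 1, 0, 2, 3, 1])
b = np.array([0, 1, 2, 3, 3, 1, 1, 2, 3, 2])
print(round(cohens_kappa(a, b), 3))  # about 0.595
```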
Alexander James Kwako – ProQuest LLC, 2023
Automated assessment using Natural Language Processing (NLP) has the potential to make English speaking assessments more reliable, authentic, and accessible. Yet without careful examination, NLP may exacerbate social prejudices based on gender or native language (L1). Current NLP-based assessments are prone to such biases, yet research and…
Descriptors: Gender Bias, Natural Language Processing, Native Language, Computational Linguistics
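The dissertation above examines gender and L1 bias in NLP-based speaking scores; the truncated abstract does not state which bias statistics it uses. One simple screening statistic (not necessarily the one used in that work) is the standardized mean difference in automated scores between examinee groups, sketched here with simulated data.

```python
# Simple bias screen: standardized mean difference (Cohen's d) in automated
# scores between two examinee groups. All scores here are simulated.
import numpy as np

def standardized_mean_diff(scores_g1: np.ndarray, scores_g2: np.ndarray) -> float:
    pooled_sd = np.sqrt((scores_g1.var(ddof=1) + scores_g2.var(ddof=1)) / 2)
    return (scores_g1.mean() - scores_g2.mean()) / pooled_sd

rng = np.random.default_rng(0)
group_a = rng.normal(3.1, 0.6, 200)   # hypothetical automated speaking scores
group_b = rng.normal(2.9, 0.6, 200)
print(round(standardized_mean_diff(group_a, group_b), 2))
```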
Peer reviewed
PDF on ERIC
Amirian, Seyed Mohammad Reza – International Journal of Language Testing, 2020
The purpose of the present study was two-fold: (a) it examined the fairness of the Special English Test (SET) of the Iranian National University Entrance Exam (INUEE) by analyzing Differential Item Functioning (DIF) in the reading comprehension section of the test, and (b) it explored test takers' attitudes towards possible sources of unfairness and…
Descriptors: Reading Comprehension, Test Bias, English for Special Purposes, Language Tests
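The truncated abstract above does not name the DIF procedure applied to the SET reading items. One widely used option is the Mantel-Haenszel procedure, sketched below with invented counts; the -2.35 factor converts the common odds ratio to the ETS delta (MH D-DIF) metric.

```python
# Mantel-Haenszel DIF sketch. For each matched ability stratum k we have a
# 2x2 table: (A_k, B_k) = reference group correct/incorrect counts and
# (C_k, D_k) = focal group correct/incorrect counts, with N_k total examinees.
# alpha_MH = sum(A_k*D_k/N_k) / sum(B_k*C_k/N_k);  MH D-DIF = -2.35 * ln(alpha_MH)
import numpy as np

def mh_d_dif(tables: np.ndarray) -> float:
    """`tables` has shape (strata, 4) with columns A, B, C, D."""
    A, B, C, D = tables.T.astype(float)
    N = A + B + C + D
    alpha_mh = np.sum(A * D / N) / np.sum(B * C / N)
    return -2.35 * np.log(alpha_mh)

# Hypothetical counts for one item across four matched score strata
tables = np.array([
    [30, 20, 25, 25],
    [40, 15, 32, 23],
    [50, 10, 41, 19],
    [55,  5, 48, 12],
])
print(round(mh_d_dif(tables), 2))  # |MH D-DIF| >= 1.5 is often treated as large DIF
```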
Peer reviewed
PDF on ERIC
Gübes, Nese; Uyar, Seyma – International Journal of Progressive Education, 2020
This study aims to compare the performance of different small sample equating methods in the presence and absence of differential item functioning (DIF) in common items. In this research, Tucker linear equating, Levine linear equating, unsmoothed and pre-smoothed (C=4) chained equipercentile equating, and simplified circle arc equating methods…
Descriptors: Test Bias, Equated Scores, Test Items, Methods
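The study above compares Tucker, Levine, chained equipercentile, and circle-arc equating in small samples. As background only, the basic linear observed-score equating function that the Tucker and Levine methods build on is sketched below; those methods differ from this sketch in estimating the means and SDs for a synthetic population via the common (anchor) items, a step omitted here.

```python
# Basic linear observed-score equating: map a score x on form X to the Y scale
# by matching means and SDs, l_Y(x) = (sd_Y/sd_X) * (x - mean_X) + mean_Y.
import numpy as np

def linear_equate(x, scores_x: np.ndarray, scores_y: np.ndarray):
    mu_x, sd_x = scores_x.mean(), scores_x.std(ddof=1)
    mu_y, sd_y = scores_y.mean(), scores_y.std(ddof=1)
    return sd_y / sd_x * (np.asarray(x) - mu_x) + mu_y

rng = np.random.default_rng(1)
form_x = rng.normal(27, 6, 50)   # hypothetical small-sample score distributions
form_y = rng.normal(30, 5, 50)
print(np.round(linear_equate([20, 27, 35], form_x, form_y), 2))
```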
Peer reviewed
Direct link
Geiger, Tray J.; Amrein-Beardsley, Audrey; Holloway, Jessica – Educational Assessment, Evaluation and Accountability, 2020
For this study, researchers critically reviewed documents from the highest-profile of the 15 teacher evaluation lawsuits that occurred throughout the U.S. pertaining to the use of student test scores to evaluate teachers. In New Mexico, teacher plaintiffs contested how they were being evaluated and held accountable using a homegrown…
Descriptors: Court Litigation, Teacher Responsibility, Accountability, Value Added Models
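The record above concerns litigation over New Mexico's value-added teacher evaluation model; the abstract does not describe that model's form. A bare-bones residual-based value-added estimate (regress current scores on prior scores, then average residuals by teacher) is sketched below with simulated data; operational VAM systems add many more covariates, multiple years, and shrinkage.

```python
# Bare-bones value-added sketch: regress current test scores on prior scores,
# then average the residuals by teacher. All data here are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_students, n_teachers = 300, 10
teacher = rng.integers(0, n_teachers, n_students)
prior = rng.normal(0, 1, n_students)
true_effect = rng.normal(0, 0.2, n_teachers)
current = 0.7 * prior + true_effect[teacher] + rng.normal(0, 0.5, n_students)

# OLS of current on prior (with intercept), then residuals
X = np.column_stack([np.ones(n_students), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residuals = current - X @ beta

# Teacher "value-added" = mean residual of that teacher's students
vam = np.array([residuals[teacher == t].mean() for t in range(n_teachers)])
print(np.round(vam, 2))
```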
Peer reviewed
Direct link
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating results in accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
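The abstract above examines item parameter drift under the Rasch model with small samples, but the truncated text does not state how drift was detected. A purely illustrative screen is to re-estimate item difficulties in a later administration, place them on the base scale (here by mean-linking), and flag items whose difficulty shifts beyond a threshold; the 0.5-logit cutoff below is a common rule of thumb, not a value taken from the study.

```python
# Illustrative item parameter drift screen under the Rasch model: compare item
# difficulty estimates from two administrations after mean-linking them to a
# common scale, and flag items whose difficulty shifts beyond a threshold.
# Difficulty values and the 0.5-logit threshold are illustrative only.
import numpy as np

def flag_drift(b_base: np.ndarray, b_new: np.ndarray, threshold: float = 0.5):
    # Place the new estimates on the base scale by equating mean difficulty.
    b_new_linked = b_new - (b_new.mean() - b_base.mean())
    drift = b_new_linked - b_base
    return drift, np.abs(drift) > threshold

b_base = np.array([-1.2, -0.4, 0.0, 0.6, 1.0])   # hypothetical base difficulties
b_new  = np.array([-1.1, -0.3, 0.8, 0.7, 1.1])   # later administration
drift, flagged = flag_drift(b_base, b_new)
print(np.round(drift, 2), flagged)   # the third item is flagged
```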