Publication Date
In 2025 | 1 |
Since 2024 | 5 |
Since 2021 (last 5 years) | 14 |
Since 2016 (last 10 years) | 33 |
Since 2006 (last 20 years) | 45 |
Descriptor
Error Patterns | 61 |
Scoring | 61 |
Foreign Countries | 12 |
Computer Assisted Testing | 11 |
Evaluation Methods | 10 |
Graduate Students | 10 |
Scores | 9 |
Accuracy | 8 |
Comparative Analysis | 8 |
English (Second Language) | 8 |
Writing Evaluation | 8 |
Author
Tatsuoka, Kikumi K. | 5 |
Lockwood, Adam B. | 2 |
Mrazik, Martin | 2 |
Slate, John R. | 2 |
Akihito Kamata | 1 |
Alex J. Mechaber | 1 |
Alfonso, Vincent C. | 1 |
Allen, Laura K. | 1 |
Almusharraf, Norah | 1 |
Alonzo, Julie | 1 |
Alotaibi, Hind | 1 |
Publication Type
Reports - Research | 61 |
Journal Articles | 50 |
Speeches/Meeting Papers | 5 |
Numerical/Quantitative Data | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 17 |
Postsecondary Education | 14 |
Elementary Education | 7 |
Secondary Education | 5 |
Middle Schools | 4 |
Junior High Schools | 3 |
Grade 6 | 2 |
Grade 7 | 2 |
Grade 8 | 2 |
High Schools | 2 |
Early Childhood Education | 1 |
Audience
Practitioners | 1 |
Researchers | 1 |
Location
Canada | 2 |
China | 2 |
United States | 2 |
Australia | 1 |
Bosnia and Herzegovina | 1 |
Italy | 1 |
Japan | 1 |
Mongolia | 1 |
Taiwan | 1 |
United Kingdom | 1 |
Laws, Policies, & Programs
Individuals with Disabilities… | 1 |
Matt Homer – Advances in Health Sciences Education, 2024
Quantitative measures of systematic differences in OSCE scoring across examiners (often termed examiner stringency) can threaten the validity of examination outcomes. Such effects are usually conceptualised and operationalised based solely on checklist/domain scores in a station, and global grades are not often used in this type of analysis. In…
Descriptors: Examiners, Scoring, Validity, Cutting Scores
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large-language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
Xin Qiao; Akihito Kamata; Cornelis Potgieter – Grantee Submission, 2024
Oral reading fluency (ORF) assessments are commonly used to screen at-risk readers and evaluate interventions' effectiveness as curriculum-based measurements. Similar to the standard practice in item response theory (IRT), calibrated passage parameter estimates are currently used as if they were population values in model-based ORF scoring.…
Descriptors: Oral Reading, Reading Fluency, Error Patterns, Scoring
Mark White; Matt Ronfeldt – Educational Assessment, 2024
Standardized observation systems seek to reliably measure a specific conceptualization of teaching quality, managing rater error through mechanisms such as certification, calibration, validation, and double-scoring. These mechanisms both support high quality scoring and generate the empirical evidence used to support the scoring inference (i.e.,…
Descriptors: Interrater Reliability, Quality Control, Teacher Effectiveness, Error Patterns
Li, Liang-Yi; Huang, Wen-Lung – Educational Technology & Society, 2023
With increasing bandwidth, videos have gradually come into use as submissions for online peer assessment activities. However, their transient nature imposes a high cognitive load on students, particularly low-ability students. Therefore, reviewers' ability is a key factor that may affect the reviewing process and performance in an online video peer…
Descriptors: Peer Evaluation, Undergraduate Students, Video Technology, Evaluation Methods
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Lockwood, Adam B.; Klatka, Kelsey; Freeman, Kelli; Farmer, Ryan L.; Benson, Nicholas – Journal of Psychoeducational Assessment, 2023
Sixty-three Woodcock-Johnson IV Tests of Achievement protocols, administered by 26 school psychology trainees, were examined to determine the frequency of examiner errors. Errors were noted on all protocols and ranged from 8 to 150 per administration. Critical (e.g., start, stop, and calculation) errors were noted on roughly 97% of protocols.…
Descriptors: Achievement Tests, School Psychology, Counselor Training, Trainees
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Baral, Sami; Botelho, Anthony F.; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – International Educational Data Mining Society, 2021
Open-ended questions in mathematics are commonly used by teachers to monitor and assess students' deeper conceptual understanding of content. Student answers to these types of questions often exhibit a combination of language, drawn diagrams and tables, and mathematical formulas and expressions that supply teachers with insight into the processes…
Descriptors: Scoring, Automation, Mathematics Tests, Student Evaluation
Corcoran, Stephanie – Contemporary School Psychology, 2022
With iPad-mediated cognitive assessment gaining popularity with school districts and the need for alternative modes of training and instruction during the COVID-19 pandemic, school psychology training programs will need to adapt to effectively train their students to be competent in administering, scoring, and interpreting cognitive…
Descriptors: School Psychologists, Professional Education, Job Skills, Cognitive Tests
Lockwood, Adam B.; Sealander, Karen; Gross, Thomas J.; Lanterman, Christopher – Journal of Psychoeducational Assessment, 2020
Achievement tests are used to make high-stakes (e.g., special education placement) decisions, and previous research on norm-referenced assessment suggests that errors are ubiquitous. In our study of 42 teacher trainees, utilizing five of the six core subtests of the Kaufman Test of Educational Achievement, Third Edition (KTEA-3), we found that…
Descriptors: Achievement Tests, Preservice Teachers, Testing, Scoring
Henbest, Victoria S.; Apel, Kenn – Language, Speech, and Hearing Services in Schools, 2021
Purpose: As an initial step in determining whether a spelling error analysis might be useful in measuring children's linguistic knowledge, the relation between the frequency of types of scores from a spelling error analysis and children's performance on measures of phonological and orthographic pattern awareness was examined. Method: The spellings…
Descriptors: Elementary School Students, Grade 1, Spelling, Orthographic Symbols
Oak, Erika; Viezel, Kathleen D.; Dumont, Ron; Willis, John – Journal of Psychoeducational Assessment, 2019
Individuals trained in the use of cognitive tests should be able to complete an assessment without making administrative, scoring, or recording errors. However, an examination of 295 Wechsler protocols completed by graduate students and practicing school psychologists revealed that errors are the norm, not the exception. The most common errors…
Descriptors: Intelligence Tests, Children, Adults, Testing
Chan, Sathena; May, Lyn – Language Testing, 2023
Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that…
Descriptors: Scoring, Writing Evaluation, Reading Tests, Listening Skills
Treiman, Rebecca; Kessler, Brett; Caravolas, Markéta – Journal of Research in Reading, 2019
Background: Children's spellings are often scored as correct or incorrect, but other measures may be better predictors of later spelling performance. Method: We examined seven measures of spelling in Reception Year and Year 1 (5-6 years old) as predictors of performance on a standardised spelling test in Year 2 (age 7). Results: Correctness was…
Descriptors: Spelling, Scoring, Predictor Variables, Elementary School Students