Showing all 10 results
Peer reviewed
Raykov, Tenko – Measurement: Interdisciplinary Research and Perspectives, 2023
This software review discusses Stata's capabilities for item response theory modeling. The commands needed to fit the popular one-, two-, and three-parameter logistic models are discussed first. The procedure for testing discrimination-parameter equality in the one-parameter model is then outlined. The commands for fitting…
Descriptors: Item Response Theory, Models, Comparative Analysis, Item Analysis
Peer reviewed
Small, Ruth V.; Arnone, Marilyn P. – Knowledge Quest, 2014
Just as with print resources, as the number of Web-based resources continues to soar, the need to evaluate them has become a critical information skill for both children and adults. This is particularly true for schools where librarians often are called on to recommend Web resources to classroom teachers, parents, and students, and to support…
Descriptors: Web Sites, Computer Software Evaluation, Measurement Techniques, Motivation
Peer reviewed
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Peer reviewed
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"[R] Online Writing Evaluation Service was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
Peer reviewed
PDF available on ERIC
Kim, Seong-in; Hameed, Ibrahim A. – Art Therapy: Journal of the American Art Therapy Association, 2009
For mental health professionals, art assessment is a useful tool for patient evaluation and diagnosis. Consideration of various color-related elements is important in art assessment. This correlational study introduces the concept of variety of color as a new color-related element of an artwork. This term represents a comprehensive use of color,…
Descriptors: Mental Health Workers, Essays, Scoring, Visual Stimuli
Peer reviewed
Gibbs, William J.; Bernas, Ronan S. – Journal of Computing in Higher Education, 2007
This descriptive pilot study employed the Grasha-Riechmann Student Learning Style Scale (GRSLSS) to examine student communication and interaction in an online educational discussion that ran for fourteen days. Discussion activity exhibited conversational turns, and messages read as conversation rather than exposition. Individuals scoring…
Descriptors: Cognitive Style, Computer Software Evaluation, Measures (Individuals), Scoring
Peer reviewed
PDF available on ERIC
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Peer reviewed
PDF available on ERIC
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Chung, Gregory K. W. K.; Baker, Eva L.; Brill, David G.; Sinha, Ravi; Saadat, Farzad; Bewley, William L. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2006
A critical first step in developing training systems is gathering quality information about a trainee's competency in a skill or knowledge domain. Such information includes an estimate of what the trainee knows prior to training, how much has been learned from training, how well the trainee may perform in future task situations, and whether to…
Descriptors: Distance Education, Skill Analysis, Knowledge Level, Prior Learning