Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 3 |
Author
Attali, Yigal | 1 |
Bridgeman, Brent | 1 |
Drake, Samuel | 1 |
Gu, Lixiong | 1 |
Trapani, Catherine | 1 |
Wolfe, Edward W. | 1 |
Young, I. Phillip | 1 |
Publication Type
Journal Articles | 3 |
Reports - Research | 2 |
Reports - Evaluative | 1 |
Education Level
Higher Education | 3 |
Postsecondary Education | 1 |
Assessments and Surveys
Graduate Record Examinations | 3 |
Test of English as a Foreign Language | 1
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach to automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
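The prompt-independent scoring idea summarized in the Attali, Bridgeman, and Trapani abstract above can be sketched in a few lines of Python. This is only an illustration of combining one fixed feature set with one fixed weight vector into an essay score; the feature names and weights are hypothetical and are not taken from the article or its scoring engine.

```python
# Illustrative sketch only: a single, prompt-independent set of features and
# weights applied to every essay, so scores keep the same meaning across prompts.
# Feature names and weights are hypothetical, not taken from the article.

GENERIC_WEIGHTS = {
    "grammar_errors_per_100_words": -0.8,  # hypothetical weight
    "average_word_length": 0.5,            # hypothetical weight
    "log_essay_length": 1.2,               # hypothetical weight
}

def generic_essay_score(features: dict) -> float:
    """Combine the same linguistic features with the same weights for any prompt."""
    return sum(weight * features.get(name, 0.0)
               for name, weight in GENERIC_WEIGHTS.items())

# The same function scores essays written to an existing prompt or a new one.
print(generic_essay_score({"grammar_errors_per_100_words": 2.0,
                           "average_word_length": 4.7,
                           "log_essay_length": 5.9}))
```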
Young, I. Phillip – Journal of Research on Leadership Education, 2008
Empirical studies addressing admission to and graduation from a doctoral program focusing on educational leadership are noticeably absent from the professional literature, and this study seeks to partially fill that void by testing specific hypotheses. Archival data were used to conduct a three-group discriminant analysis where the…
Descriptors: Grade Point Average, Predictive Validity, Doctoral Programs, Sampling
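As a rough illustration of the kind of analysis the Young abstract mentions, the sketch below fits a three-group linear discriminant analysis with scikit-learn. The predictors and group labels are simulated placeholders, not the study's archival data or its exact procedure.

```python
# Minimal sketch of a three-group discriminant analysis (simulated data,
# not the study's variables or procedure).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 2))       # hypothetical predictors (e.g., GPA, test scores)
y = np.repeat([0, 1, 2], 30)       # three hypothetical applicant/graduate groups

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict(X[:5]))               # predicted group membership
print(lda.explained_variance_ratio_)    # between-group variance per discriminant function
```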
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
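The medium comparison described in the Gu, Drake, and Wolfe abstract can be illustrated with a toy check of per-item difficulty across delivery media. The simulated response matrices and the 0.10 flagging threshold below are assumptions for illustration only, not the study's DIF procedure.

```python
# Toy illustration only: compare per-item proportion correct between
# computer-based and paper-based groups and flag large gaps. The simulated
# responses and the 0.10 threshold are assumptions, not the study's method.
import numpy as np

rng = np.random.default_rng(1)
computer = rng.integers(0, 2, size=(200, 60))  # simulated 0/1 responses, 60 items
paper = rng.integers(0, 2, size=(200, 60))

delta = computer.mean(axis=0) - paper.mean(axis=0)   # difficulty gap per item
flagged = np.flatnonzero(np.abs(delta) > 0.10)
print(f"{flagged.size} of 60 items differ by more than .10 in proportion correct")
```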