Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 4 |
Descriptor
Computer Assisted Testing | 6 |
Predictive Validity | 6 |
Scoring | 3 |
Correlation | 2 |
Essays | 2 |
Student Placement | 2 |
Teacher Attitudes | 2 |
Test Reliability | 2 |
Undergraduate Students | 2 |
Accountability | 1 |
Adaptive Testing | 1 |
Author
James, Cindy L. | 2 |
Alexander, Cara J. | 1 |
Crescini, Weronika M. | 1 |
Divgi, D. R. | 1 |
Flory, Michael | 1 |
Jones, Marshall B. | 1 |
Juskewitch, Justin E. | 1 |
Lachman, Nirusha | 1 |
Pawlina, Wojciech | 1 |
Sun, Chris | 1 |
Publication Type
Reports - Evaluative | 6 |
Journal Articles | 4 |
Information Analyses | 1 |
Education Level
Higher Education | 2 |
Postsecondary Education | 1 |
Location
Minnesota | 1 |
Assessments and Surveys
Armed Services Vocational Aptitude Battery | 2 |
ACT Assessment | 1 |
COMPASS (Computer Assisted… | 1 |
SAT (College Admission Test) | 1 |
Flory, Michael; Sun, Chris – CNA Corporation, 2017
The Every Student Succeeds Act (ESSA) provides greater flexibility in state accountability systems than did previous federal legislation. In response, many states continue to refine their accountability systems to include college readiness tests, including college admissions and placement exams. This paper summarizes perspectives of K-12…
Descriptors: College Readiness, College Entrance Examinations, Student Placement, Educational Legislation
James, Cindy L. – Assessing Writing, 2008
The scoring of student essays by computer has generated much debate and subsequent research. The majority of the research thus far has focused on validating the automated scoring tools by comparing the electronic scores to human scores of writing or other measures of writing skills, and exploring the predictive validity of the automated scores.…
Descriptors: Predictive Validity, Scoring, Electronic Equipment, Essays
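The validation approach this abstract describes, comparing electronic scores against human scores of writing, is typically quantified with a simple correlation. The snippet below is an illustrative sketch only, using invented scores and SciPy's pearsonr; it is not code or data from the study.

```python
# Illustrative sketch (invented data, not from the study): quantifying agreement
# between automated and human essay scores with a Pearson correlation.
from scipy.stats import pearsonr

machine_scores = [3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 2.0, 3.5]  # hypothetical automated scores
human_scores = [4.0, 4.0, 3.0, 5.0, 2.5, 4.0, 2.5, 3.0]    # hypothetical human ratings

r, p = pearsonr(machine_scores, human_scores)
print(f"machine-human correlation: r = {r:.2f} (p = {p:.3f})")
```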
Alexander, Cara J.; Crescini, Weronika M.; Juskewitch, Justin E.; Lachman, Nirusha; Pawlina, Wojciech – Anatomical Sciences Education, 2009
The goals of our study were to determine the predictive value and usability of an audience response system (ARS) as a knowledge assessment tool in an undergraduate medical curriculum. Over a three-year period (2006-2008), data were collected from first-year didactic blocks in Genetics/Histology and Anatomy/Radiology (n = 42-50 per class). During…
Descriptors: Feedback (Response), Medical Education, Audience Response, Genetics

Divgi, D. R. – Applied Psychological Measurement, 1989
Two methods for estimating the reliability of a computerized adaptive test (CAT) without using item response theory are presented. The data consist of CAT and paper-and-pencil scores from identical or equivalent samples, and scores for all examinees on one or more covariates, using the Armed Services Vocational Aptitude Battery. (TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Predictive Validity
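As a rough illustration of the parallel-forms idea behind such CAT versus paper-and-pencil comparisons (not either of Divgi's two methods, which rely on equivalent samples and covariates), one can correlate the two forms treated as alternate forms of the same test. Everything below is simulated.

```python
# Simplified illustration only, not Divgi's methods: a parallel-forms-style
# reliability estimate from simulated CAT and paper-and-pencil scores for the
# same hypothetical examinees.
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(size=200)                          # hypothetical latent ability
cat_scores = ability + rng.normal(scale=0.5, size=200)  # CAT form
pp_scores = ability + rng.normal(scale=0.5, size=200)   # paper-and-pencil form

# The correlation between the two forms serves as the reliability estimate.
reliability = np.corrcoef(cat_scores, pp_scores)[0, 1]
print(f"parallel-forms reliability estimate: {reliability:.2f}")
```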
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
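The placement question in this abstract amounts to asking how often different scorers imply the same course placement. The sketch below is purely hypothetical (invented scores and cut-off, not the study's data) and simply shows how such agreement could be tallied.

```python
# Hypothetical sketch: placement agreement between computerized and human essay
# scores given a cut score. All values are invented for illustration.
computer = [62, 71, 55, 80, 48, 90, 67, 73]
human = [65, 69, 50, 78, 52, 88, 60, 75]
CUT = 60  # hypothetical placement cut score

place_computer = [s >= CUT for s in computer]
place_human = [s >= CUT for s in human]

agreement = sum(c == h for c, h in zip(place_computer, place_human)) / len(computer)
print(f"placement agreement between scoring methods: {agreement:.0%}")
```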
Jones, Marshall B. – 1991
The microcomputer has increased interest in performance testing, which samples what a person can do rather than what he or she knows. Conventional psychometric theory is based on knowledge tests, but in performance testing the unit of analysis is a trial, and it is unreasonable to assume that mean performance and intertrial correlations are…
Descriptors: Computer Assisted Testing, Higher Education, Military Personnel, Performance Based Assessment
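To make the trial-as-unit-of-analysis point concrete, the sketch below builds a fabricated examinee-by-trial score matrix and computes each examinee's mean performance along with the trial-by-trial (intertrial) correlation matrix; it is an illustration, not material from the report.

```python
# Fabricated data: rows are examinees, columns are trials of a performance test.
import numpy as np

rng = np.random.default_rng(1)
n_examinees, n_trials = 50, 5
skill = rng.normal(size=(n_examinees, 1))
trials = skill + rng.normal(scale=0.8, size=(n_examinees, n_trials))

mean_performance = trials.mean(axis=1)                # one mean score per examinee
intertrial_corr = np.corrcoef(trials, rowvar=False)   # trial-by-trial correlations
print("mean performance (first 5 examinees):", np.round(mean_performance[:5], 2))
print("intertrial correlations:\n", np.round(intertrial_corr, 2))
```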