Showing all 5 results
Peer reviewed
PDF on ERIC
Meagan Karvonen; Russell Swinburne Romine; Amy K. Clark – Practical Assessment, Research & Evaluation, 2024
This paper describes methods and findings from student cognitive labs, teacher cognitive labs, and test administration observations as evidence evaluated in a validity argument for a computer-based alternate assessment for students with significant cognitive disabilities. Validity of score interpretations and uses for alternate assessments based…
Descriptors: Students with Disabilities, Intellectual Disability, Severe Disabilities, Student Evaluation
Peer reviewed
Direct link
Arslan, Burcu; Jiang, Yang; Keehner, Madeleine; Gong, Tao; Katz, Irvin R.; Yan, Fred – Educational Measurement: Issues and Practice, 2020
Computer-based educational assessments often include items that involve drag-and-drop responses. There are different ways that drag-and-drop items can be laid out and different choices that test developers can make when designing these items. Currently, these decisions are based on experts' professional judgments and design constraints, rather…
Descriptors: Test Items, Computer Assisted Testing, Test Format, Decision Making
Peer reviewed
PDF on ERIC
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Peer reviewed
Direct link
Bejar, Isaac I. – Assessment in Education: Principles, Policy & Practice, 2011
Automated scoring of constructed responses is already operational in several testing programmes. However, as the methodology matures and the demand for the utilisation of constructed responses increases, the volume of automated scoring is likely to increase at a fast pace. Quality assurance and control of the scoring process will likely be more…
Descriptors: Evidence, Quality Control, Scoring, Quality Assurance
Peer reviewed
Grant, Carolyn D.; Nash, Michael R. – Psychological Assessment, 1995
In a counterbalanced, within subjects, repeated measures design, 130 undergraduates were administered the Computer-Assisted Hypnosis Scale (CAHS) and the Stanford Hypnotic Susceptibility Scale and were hypnotized. The CAHS was shown to be a psychometrically sound instrument for measuring hypnotic ability. (SLD)
Descriptors: Ability, Clinical Diagnosis, Computer Assisted Testing, Diagnostic Tests