Corcoran, Stephanie – Contemporary School Psychology, 2022
With iPad-mediated cognitive assessment gaining popularity in school districts and the need for alternative modes of training and instruction during the COVID-19 pandemic, school psychology training programs will need to adapt to effectively train their students to be competent in administering, scoring, and interpreting cognitive…
Descriptors: School Psychologists, Professional Education, Job Skills, Cognitive Tests
Klein, Michael – ProQuest LLC, 2019
The purpose of the current study was to examine differences in the number and types of administration and scoring errors by administration method (digital/Q-Interactive vs. paper-and-pencil) on the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V). WISC-V administration and scoring checklists were developed in order to provide an…
Descriptors: Intelligence Tests, Children, Test Format, Computer Assisted Testing
Conijn, Rianne; Martinez-Maldonado, Roberto; Knight, Simon; Buckingham Shum, Simon; Van Waes, Luuk; van Zaanen, Menno – Computer Assisted Language Learning, 2022
Current writing support tools tend to focus on assessing final or intermediate products, rather than the writing process. However, sensing technologies, such as keystroke logging, can enable provision of automated feedback during, and on aspects of, the writing process. Despite this potential, little is known about the critical indicators that can…
Descriptors: Automation, Feedback (Response), Writing Evaluation, Learning Analytics
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the two-essay writing assessment of the analytical writing section of the GRE® test, in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
Kuentzel, Jeffrey G.; Hetterscheidt, Lesley A.; Barnett, Douglas – Journal of Psychoeducational Assessment, 2011
The rigors of standardized testing make for numerous opportunities for examiner error, including simple computational mistakes in scoring. Although experts recommend that test scoring be double-checked, the extent to which independent double-checking would reduce scoring errors is not known. A double-checking procedure was established at a…
Descriptors: Feedback (Response), Intelligence, Testing, Standardized Tests
Dosch, Michael P. – ProQuest LLC, 2010
The general aim of the present retrospective study was to examine the test mode effect, that is, the difference in performance when tests are taken on computer (CBT) or by paper and pencil (PnP). The specific purpose was to examine the degree to which extensive practice in CBT among graduate students in nurse anesthesia would raise scores on a…
Descriptors: Feedback (Response), Graduate Students, Grade Point Average, Nurses
Miller, Mark J.; Cowger, Ernest, Jr.; Young, Tony; Tobacyk, Jerome; Sheets, Tillman; Loftus, Christina – College Student Journal, 2008
This study examined the degree of similarity between scores on the Self-Directed Search and an online instrument measuring Holland types. A relatively high congruency score was found between the two measures. Implications for career counselors are discussed.
Descriptors: Career Counseling, Personality Assessment, Congruence (Psychology), Personality Traits
Bennett, Randy Elliot; Rock, Donald A. – 1993
Formulating-Hypotheses (F-H) items present a situation and ask the examinee to generate as many explanations for it as possible. This study examined the generalizability, validity, and examinee perceptions of a computer-delivered version of the task. Eight F-H questions were administered to 192 graduate students. Half of the items restricted…
Descriptors: Computer Assisted Testing, Difficulty Level, Generalizability Theory, Graduate Students