Showing all 8 results
Peer reviewed
Direct link
Severo, Milton; Gaio, A. Rita; Povo, Ana; Silva-Pereira, Fernanda; Ferreira, Maria Amélia – Anatomical Sciences Education, 2015
In theory, formula scoring methods increase the reliability of multiple-choice tests in comparison with number-right scoring. This study aimed to evaluate the impact of the formula scoring method in clinical anatomy multiple-choice examinations and to compare it with the number-right scoring method, hoping to achieve an…
Descriptors: Anatomy, Multiple Choice Tests, Scoring, Decision Making
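The correction-for-guessing rule usually meant by "formula scoring" can be contrasted with number-right scoring in a short sketch. The option count (k) and the response layout below are illustrative assumptions, not details taken from the study above.

```python
# Minimal sketch: number-right scoring vs. the classic formula-scoring
# (correction-for-guessing) rule, score = R - W/(k-1), where k is the
# number of options per item. Omitted items are neither rewarded nor penalized.

def number_right_score(responses):
    """One point per correct answer; wrong answers and omits score 0."""
    return sum(1 for r in responses if r == "correct")

def formula_score(responses, k=5):
    """Right minus wrong/(k-1); k = 5 is an assumed option count."""
    right = sum(1 for r in responses if r == "correct")
    wrong = sum(1 for r in responses if r == "wrong")
    return right - wrong / (k - 1)

# Example: 30 right, 10 wrong, 10 omitted on five-option items
answers = ["correct"] * 30 + ["wrong"] * 10 + ["omit"] * 10
print(number_right_score(answers))  # 30
print(formula_score(answers))       # 30 - 10/4 = 27.5
```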
Peer reviewed
Download full text (PDF on ERIC)
Jancarík, Antonín; Kostelecká, Yvona – Electronic Journal of e-Learning, 2015
Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. This paper focuses on one of these tools: matching questions, in which one…
Descriptors: Online Courses, Computer Assisted Testing, Test Items, Scoring Formulas
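As a rough illustration of the automatic assessment such tools perform, a matching question can be scored by comparing the student's pairings with an answer key. The data layout and partial-credit rule below are assumptions for illustration, not the algorithm examined in the paper.

```python
# Hedged sketch: auto-scoring a matching question with partial credit.

def score_matching(answer_key: dict, response: dict) -> float:
    """Return the fraction of pairs the student matched correctly."""
    correct = sum(1 for left, right in answer_key.items()
                  if response.get(left) == right)
    return correct / len(answer_key)

key = {"H2O": "water", "NaCl": "salt", "CO2": "carbon dioxide"}
student = {"H2O": "water", "NaCl": "carbon dioxide", "CO2": "salt"}
print(score_matching(key, student))  # 1 of 3 pairs correct ≈ 0.33
```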
Peer reviewed
Direct link
Barkaoui, Khaled – Assessment in Education: Principles, Policy & Practice, 2011
This study examined the effects of marking method and rater experience on ESL (English as a Second Language) essay test scores and rater performance. Each of 31 novice and 29 experienced raters rated a sample of ESL essays both holistically and analytically. Essay scores were analysed using a multi-faceted Rasch model to compare test-takers'…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Interrater Reliability
Peer reviewed
Download full text (PDF on ERIC)
Baldi, Stephane, Ed.; Kutner, Mark; Greenberg, Elizabeth; Jin, Ying; Baer, Justin; Moore, Elizabeth; Dunleavy, Eric; Berlin, Martha; Mohadjer, Leyla; Binzer, Greg; Krenzke, Thomas; Hogan, Jacqueline; Amsbary, Michelle; Forsyth, Barbara; Clark, Lyn; Annis, Terri; Bernstein, Jared; White, Sheida – National Center for Education Statistics, 2009
The 2003 National Assessment of Adult Literacy (NAAL) assessed the English literacy skills of a nationally representative sample of more than 19,000 U.S. adults (age 16 and older) residing in households and correctional institutions. NAAL is the first national assessment of adult literacy since the 1992 National Adult Literacy Survey (NALS). The…
Descriptors: Correctional Institutions, Scaling, Numeracy, Field Tests
Peer reviewed
Hsu, Louis M. – Educational and Psychological Measurement, 1979
Though the Paired-Item-Score (Eakin and Long) (EJ 174 780) method of scoring true-false tests has certain advantages over the traditional scoring methods (percentage right and right minus wrong), these advantages are attained at the cost of a larger risk of misranking the examinees. (Author/BW)
Descriptors: Comparative Analysis, Guessing (Tests), Objective Tests, Probability
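The two traditional true-false scoring rules named in the abstract can be written out directly; the Paired-Item-Score method itself is not reproduced here, and the numbers below are illustrative.

```python
# Hedged sketch of the two traditional true-false scoring rules mentioned above.

def percent_right(right: int, n_items: int) -> float:
    """Percentage-right scoring."""
    return 100 * right / n_items

def right_minus_wrong(right: int, wrong: int) -> int:
    """For two-option (true-false) items, the guessing correction
    R - W/(k-1) reduces to right minus wrong, since k = 2."""
    return right - wrong

print(percent_right(40, 50))      # 80.0
print(right_minus_wrong(40, 10))  # 30
```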
Peer reviewed
Stuhlmann, Janice; Daniel, Cathy; Dellinger, Amy; Denny, R. Kenton; Powers, Taylor – Reading Psychology, 1999
Investigates whether training raters to interpret the scoring dimensions on a rubric would increase reliability. Compares two groups of kindergarten and first-grade teachers: one group with training, one without. Finds that training increases raters' abilities to reliably interpret scoring items. (SC)
Descriptors: Childrens Writing, Comparative Analysis, Generalizability Theory, Grade 1
Larkin, Kevin C.; Weiss, David J. – 1975
A 15-stage pyramidal test and a 40-item two-stage test were constructed and administered by computer to 111 college undergraduates. The two-stage test was found to utilize a smaller proportion of its potential score range than the pyramidal test. Score distributions for both tests were positively skewed but not significantly different from the…
Descriptors: Ability, Aptitude Tests, Comparative Analysis, Computer Programs
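A two-stage test of the kind compared here routes each examinee to a second-stage form on the basis of a short routing test; the cut scores, form labels, and routing-test length in this sketch are illustrative assumptions, not the study's design.

```python
# Hedged sketch of two-stage adaptive routing: score a short routing test,
# then assign an easier or harder second-stage form.

def route_second_stage(routing_responses, cut_low=4, cut_high=7):
    """Pick a second-stage form from the routing-test number-right score."""
    score = sum(routing_responses)  # 1 = correct, 0 = incorrect
    if score <= cut_low:
        return "easy form"
    if score >= cut_high:
        return "hard form"
    return "medium form"

print(route_second_stage([1, 1, 0, 1, 0, 1, 1, 0, 1, 1]))  # 7 right -> "hard form"
```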
Wolfe, Edward; And Others – 1993
The two studies described here compare essays composed on word processors with those composed with pen and paper for a standardized writing assessment. The following questions guided these studies: (1) Are there differences in test administration and writing processes associated with handwritten versus word-processor writing assessments? (2) Are…
Descriptors: Adults, Comparative Analysis, Computer Uses in Education, Essays