Showing 1 to 15 of 40 results
Peer reviewed
Gamage, Kelum A. A.; Silva, Erandika K. de; Gunawardhana, Nanda – Education Sciences, 2020
Globally, the number of COVID-19 cases continues to rise daily despite strict measures adopted by many countries. Consequently, universities closed to minimise face-to-face contact, and the majority are now conducting degree programmes through online delivery. Remote online delivery and assessment are novel…
Descriptors: Online Courses, Student Evaluation, COVID-19, Pandemics
Peer reviewed
Shiralkar, Malan T.; Harris, Toi B.; Eddins-Folensbee, Florence F.; Coverdale, John H. – Academic Psychiatry, 2013
Objective: Because medical students experience a considerable amount of stress during training, academic leaders have recognized the importance of developing stress-management programs for medical students. The authors set out to identify all controlled trials of stress-management interventions and determine the efficacy of those interventions.…
Descriptors: Pass Fail Grading, Outcome Measures, Stress Management, Feedback (Response)
Texas Education Agency, 2015
This annual report provides information for the 2012-13 school year on grade-level retention and student performance in the Texas public school system. Student retention and promotion data are reported with data on the performance of students in Grades 3-8 on the State of Texas Assessments of Academic Readiness (STAAR) reading and mathematics…
Descriptors: Grade Repetition, Public Schools, State Standards, Standardized Tests
Peer reviewed
Tanilon, Jenny; Segers, Mien; Vedder, Paul; Tillema, Harm – Studies in Educational Evaluation, 2009
This study illustrates the development and validation of an admission test, labeled as Performance Samples on Academic Tasks in Educational Sciences (PSAT-Ed), designed to assess samples of performance on academic tasks characteristic of those that would eventually be encountered by examinees in an Educational Sciences program. The test was based…
Descriptors: Academic Achievement, Factor Analysis, Sciences, Pass Fail Grading
Peer reviewed
White, Casey B.; Fantone, Joseph C. – Advances in Health Sciences Education, 2010
Traditionally, medical schools have tended to assume that students will "automatically" engage in self-education effectively after graduation and subsequent training in residency and fellowships. In reality, the majority of medical graduates out in practice feel unprepared for learning on their own. Many medical schools are now adopting…
Descriptors: Medical Schools, Incentives, Lifelong Learning, Competition
Kadhi, T.; Holley, D.; Beard, J. – Online Submission, 2011
The following report of descriptive statistics addresses the matriculating classes of 2001-2007 according to their Law School Admission Council (LSAC) index. Generally, this report offers information on the first-time and ultimate Bar Exam performance of TMSL students. In addition, graduating GPA according to the LSAC index will also…
Descriptors: Grade Point Average, Law Schools, College Admission, Law Students
Peer reviewed
Bramley, Tom – Educational Research, 2010
Background: A recent article published in "Educational Research" on the reliability of results in National Curriculum testing in England (Newton, "The reliability of results from national curriculum testing in England," "Educational Research" 51, no. 2: 181-212, 2009) suggested that: (1) classification accuracy can be…
Descriptors: National Curriculum, Educational Research, Testing, Measurement
Peer reviewed
Roberts, William L.; McKinley, Danette W.; Boulet, John R. – Advances in Health Sciences Education, 2010
Due to the high-stakes nature of medical exams, it is prudent for test agencies to critically evaluate test data and control for potential threats to validity. For the typical multiple-station performance assessments used in medicine, it may take time for examinees to become comfortable with the test format and administrative protocol. Since each…
Descriptors: Student Evaluation, Pretests Posttests, Licensing Examinations (Professions), Scores
Peer reviewed
Judd, Wallace – Practical Assessment, Research & Evaluation, 2009
Over the past twenty years in performance testing a specific item type with distinguishing characteristics has arisen time and time again. It's been invented independently by dozens of test development teams. And yet this item type is not recognized in the research literature. This article is an invitation to investigate the item type, evaluate…
Descriptors: Test Items, Test Format, Evaluation, Item Analysis
Peer reviewed
Dwyer, Carol Anne – Psychological Assessment, 1996
The uses and abuses of cut scores are examined. The article demonstrates (1) that cut scores always entail judgment; (2) that cut scores inherently result in misclassification; (3) that cut scores impose an artificial dichotomy on an essentially continuous distribution of knowledge, skill, or ability; and (4) that no true cut scores exist. (SLD)
Descriptors: Classification, Cutting Scores, Educational Testing, Error of Measurement
Woodruff, David J.; Sawyer, Richard L. – 1988
Two methods for estimating measures of pass-fail reliability are derived, by which both theta and kappa may be estimated from a single test administration. Both methods are computationally simple and are based on the Spearman-Brown formula for estimating stepped-up reliability. The non-distributional…
Descriptors: Estimation (Mathematics), Licensing Examinations (Professions), Pass Fail Grading, Scores
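The Spearman-Brown step-up that the Woodruff and Sawyer abstract refers to can be illustrated with a short sketch. The function below is a generic implementation of the prophecy formula, not the authors' actual estimation procedure; the split-half example values are hypothetical.

```python
def spearman_brown(r: float, k: float = 2.0) -> float:
    """Spearman-Brown prophecy formula: predicted reliability of a test
    k times as long as one with observed reliability r."""
    return k * r / (1 + (k - 1) * r)

# Stepping up a hypothetical split-half correlation of .60 to the
# reliability of the full-length (double-length) test:
full_length = spearman_brown(0.60)
print(round(full_length, 4))  # 0.75
```

With k = 2 this is the classic split-half correction; other values of k predict the effect of lengthening or shortening a test.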
Livingston, Samuel A. – 1981
The standard error of measurement (SEM) is a measure of the inconsistency in the scores of a particular group of test-takers. It is largest for test-takers scoring near 50 percent correct and smaller for those with nearly perfect scores. On tests used to make pass/fail decisions, test-takers' scores tend to cluster in the range…
Descriptors: Error of Measurement, Estimation (Mathematics), Mathematical Formulas, Pass Fail Grading
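Livingston's point about the SEM peaking near 50 percent correct can be seen in a simple binomial error model, where the raw-score SEM is sqrt(n·p·(1−p)). This model is an illustrative assumption, not the estimation method of the paper itself.

```python
import math

def sem_binomial(n_items: int, p_correct: float) -> float:
    """Raw-score standard error of measurement under a simple
    binomial error model: sqrt(n * p * (1 - p))."""
    return math.sqrt(n_items * p_correct * (1 - p_correct))

# On a 100-item test, the SEM peaks at 50% correct and shrinks
# as scores approach the extremes:
print(sem_binomial(100, 0.50))  # 5.0
print(sem_binomial(100, 0.90))  # about 3.0
```

Because scores on pass/fail tests often cluster near the cut, many examinees sit in exactly the score range where this error is largest.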
Breyer, F. Jay; Lewis, Charles – 1994
A single-administration classification reliability index is described that estimates the probability of consistently classifying examinees to mastery or nonmastery states as if those examinees had been tested with two alternate forms. The procedure is applicable to any test used for classification purposes, subdividing that test into two…
Descriptors: Classification, Cutting Scores, Objective Tests, Pass Fail Grading
Schulz, E. Matthew; Wang, Lin – 2001
In this study, items were drawn from a full-length test of 30 items to construct shorter tests for making accurate pass/fail classifications with regard to a specific criterion point on the latent ability metric. A three-parameter item response theory (IRT) framework was used. The criterion point on the latent ability…
Descriptors: Ability, Classification, Item Response Theory, Pass Fail Grading
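The item-selection idea in the Schulz and Wang abstract — keeping the items most informative at the criterion point — can be sketched under the three-parameter logistic (3PL) model. The item parameters below are invented for illustration; this is not the authors' item pool or procedure.

```python
import math

def p3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL model: probability of a correct response at ability theta,
    with discrimination a, difficulty b, and guessing parameter c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float, c: float) -> float:
    """Fisher information of a 3PL item at ability theta."""
    p = p3pl(theta, a, b, c)
    return a ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

# Hypothetical item pool as (a, b, c) triples, and a cut score on theta:
pool = [(1.2, -0.5, 0.20), (0.8, 0.0, 0.25), (1.5, 0.4, 0.20), (1.0, 1.5, 0.20)]
theta_cut = 0.5

# A short pass/fail form keeps the items most informative at the cut:
ranked = sorted(pool, key=lambda abc: item_information(theta_cut, *abc),
                reverse=True)
short_form = ranked[:2]
```

Items with difficulty near the cut score and high discrimination contribute the most information there, which is why a short form built this way can classify nearly as accurately as the full test.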
Peer reviewed
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 1996
Data from two standard-setting exercises were analyzed using the logistic regression model that assumes no variation in severity of raters, and results were compared with those obtained by logistic regression that allowed for severity variation. Results illustrate the importance of taking between-rater differences into account. (SLD)
Descriptors: Cutting Scores, Decision Making, Evaluators, Individual Differences