Showing 16 to 30 of 146 results
Peer reviewed
Judd, Wallace – Practical Assessment, Research & Evaluation, 2009
Over the past twenty years in performance testing a specific item type with distinguishing characteristics has arisen time and time again. It's been invented independently by dozens of test development teams. And yet this item type is not recognized in the research literature. This article is an invitation to investigate the item type, evaluate…
Descriptors: Test Items, Test Format, Evaluation, Item Analysis
Hsieh, Ching-Ni – ProQuest LLC, 2011
Second language (L2) oral performance assessment always involves raters' subjective judgments and is thus subject to rater variability. The variability due to rater characteristics has important consequential impacts on decision-making processes, particularly in high-stakes testing situations (Bachman, Lynch, & Mason, 1995; A. Brown, 1995;…
Descriptors: Undergraduate Students, Phonology, Teaching Assistants, Foreign Students
Salmani Nodoushan, Mohammad Ali – Online Submission, 2008
Over the past few decades, educators in general, and language teachers in particular, have been inclined toward testing techniques that resemble real-life language performance. Unlike traditional paper-and-pencil language tests that required test-takers to attempt tests that were based on artificial and contrived language content,…
Descriptors: Portfolios (Background Materials), Portfolio Assessment, Performance Based Assessment, Testing
Peer reviewed
Grenwelge, Cheryl H. – Journal of Psychoeducational Assessment, 2009
The Woodcock Johnson III Brief Assessment is a "maximum performance test" (Reynolds, Livingston, & Willson, 2006) that is designed to assess the upper levels of knowledge and skills of the test taker, using both power and speed to obtain a large amount of information in a short period of time. The Brief Assessment also provides an adequate…
Descriptors: Test Results, Knowledge Level, Testing, Performance Tests
Manpower Administration (DOL), Washington, DC. U.S. Training and Employment Service. – 1969
To compare the live method of administering dictation tests with the recorded method, the United States Training and Employment Service (USTES) conducted studies in cooperation with the State Employment Services of Alabama, Colorado, Minnesota, Mississippi, New York, and Utah. The procedures were similar in each state, with samples being broken…
Descriptors: Comparative Analysis, Comparative Testing, Evaluation, Performance Tests
Toler, Wilma M. – Journal of Business Education, 1973
Students should be continuously evaluated on straight copy timings, technique, and work attitudes and habits in typing. (AG)
Descriptors: Business Education, Grading, Performance Criteria, Performance Tests
Frey, Bruce B.; Schmitt, Vicki L. – Journal of Advanced Academics, 2007
As the field of education moves forward in the area of assessment, researchers have yet to come to a conclusion about definitions of commonly used terms. Without a consensus on the use of fundamental terms, it is difficult to engage in meaningful discourse within the field of assessment, as well as to conduct research on and communicate about best…
Descriptors: Performance Tests, Formative Evaluation, Performance Based Assessment, Teacher Made Tests
Matrix Research Co., Alexandria, VA. – 1973
The handbook covers a comprehensive series of Job-Task Performance Tests for the Doppler Radar (AN/APN) and its Associated Computer (AN/ASN-35). The test series has been developed to measure job performance of the electronic technician. These tests encompass all phases of day-to-day preventive and corrective maintenance that technicians are…
Descriptors: Electronic Technicians, Guides, Performance Tests, Radar
Denton, Jon J.; Crowley, Lee B. – Southern Journal of Educational Research, 1978
Using 68 students in professional education coursework, the study determined whether accomplishment of performance objectives determined by satisfactory performance on unit tests during instruction was related to the student's performance on an end-of-course retention test. Evidence supported the assignment of grade credit based on objective…
Descriptors: Academic Achievement, Grading, Individualized Instruction, Mastery Learning
Peer reviewed
McGilligan, Robert P.; And Others – Psychology in the Schools, 1971
Descriptors: Art Education, Performance Factors, Performance Tests, Test Reliability
Osborn, William C. – 1977
Four essential dimensions of a performance test are detailed: directness of test method, type of criterion, standardization of conditions, and objectivity of scoring. For simplicity these factors are described as if each were dichotomous, when in actuality each is a continuum; a test method may be more or less direct, conditions more or less…
Descriptors: Performance Tests, Scoring, Test Reliability, Test Validity
Manpower Administration (DOL), Washington, DC. U.S. Training and Employment Service. – 1969
To compare the reliability of performance on recorded dictation tests with performance on live tests, 216 university students who were nearing completion of an intermediate shorthand course and 26 job applicants seeking stenographic positions were divided into 10 groups, with five receiving live dictation and five receiving recorded dictation. The…
Descriptors: Comparative Analysis, Comparative Testing, Evaluation, Performance Tests
Bucky, Steven F.; And Others – 1970
Measures of state and trait anxiety were given to aviation officer candidates (AOC's) with the usual instructions as well as with instructions to answer as if each had just made his first landing on an aircraft carrier. Significant differences were sought when comparing the experimental group to college students. (Author)
Descriptors: Anxiety, Comparative Analysis, Flight Training, Military Personnel
Wasik, John L.; Wasik, Barbara H. – Measurement and Evaluation in Guidance, 1972
Between-scale correlations indicate that both tests provide similar measures of intellectual performance, but because of the discrepancy in mean IQ scores of the WPPSI and WISC, comparisons should be confined to within-test contrasts. (Author)
Descriptors: Disadvantaged, Intelligence Tests, Measurement, Performance Tests
Nunn, Colin – Training Officer, 1976
The components of a system for practical skills assessment, or trade testing, are discussed. The system, called Progressive Assessment Testing (PAT), is used in the normal workshop situation and enables the employer to identify and correct weaknesses in performance at the appropriate point in the training process. (Author/EC)
Descriptors: Foreign Countries, Job Training, Occupational Tests, Performance Tests