Showing 8,011 to 8,025 of 10,088 results
Peer reviewed
Hanna, Gerald S.; And Others – Journal of School Psychology, 1981
Discusses four ubiquitous major sources of measurement error for individual intelligence scales. Argues that where these sources cannot be directly investigated, they should be estimated rather than ignored. Estimated the typical magnitude of error arising from each of content sampling, time sampling, scoring, and administration. (Author)
Descriptors: Error of Measurement, Intelligence Tests, Measurement Techniques, Sampling
Peer reviewed
Plake, Barbara S.; And Others – Journal of Experimental Education, 1981
Number-right and elimination scores were analyzed on a college-level mathematics exam assembled from pretest data. Anxiety measures were administered along with the experimental forms to undergraduates. Results suggest that neither test scores nor attitudes are influenced by item order, knowledge thereof, or anxiety level. (Author/GK)
Descriptors: College Mathematics, Difficulty Level, Higher Education, Multiple Choice Tests
Edwards, Dee – Assessment in Higher Education, 1979
A research project on the variability of tutor grading of student assignments at the Open University in Great Britain is reported. Factors in the variation that became apparent through discussion, questionnaires, and examination of papers are explained, and areas for further research are suggested. (MSE)
Descriptors: College Faculty, Evaluation Criteria, Foreign Countries, Formative Evaluation
Peer reviewed
Bohning, Gerry – Psychology in the Schools, 1980
An item analysis profile sheet to accompany the Slosson Intelligence Test (SIT) is helpful in providing a functional test interpretation. The lack of recorded technical and statistical information is a serious concern. Without such information, a practitioner could not use the Item Analysis of SIT with confidence. (Author)
Descriptors: Children, Educational Diagnosis, Elementary Secondary Education, Intelligence Tests
Peer reviewed
Bliss, Leonard B. – Journal of Educational Measurement, 1980
A mathematics achievement test with instructions to avoid guessing wildly was given to 168 elementary school pupils, who were later asked to complete all the questions using a differently colored pencil. Results showed that examinees, particularly the more able students, tended to omit too many items. (CTM)
Descriptors: Anxiety, Guessing (Tests), Intermediate Grades, Multiple Choice Tests
Peer reviewed
Tinsley, Howard E. A.; Kass, Richard A. – Educational and Psychological Measurement, 1980
Hit rates obtained in two cross-validation administrations of the Leisure Activity Questionnaire (LAQ) and the Paragraphs about Leisure (PAL) were determined. It was concluded that use of the PAL, with results reported in terms of factor scores, is the most valid and parsimonious measurement strategy of those investigated. (Author/BW)
Descriptors: Affective Measures, Discriminant Analysis, Factor Structure, Higher Education
Harty, Harold – New Directions for Institutional Advancement, 1979
The mail questionnaire is seen as a risky data- and information-gathering instrument but an effective way to tap constituent attitudes. Knowing when and how to use it is considered crucial in institutional advancement in academe. Goals and objectives, design assumptions, and types of questions and response formats are discussed. (Author/MLW)
Descriptors: Administration, Costs, Data Collection, Field Tests
Peer reviewed
Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
Peer reviewed
Stahl, John; Shumway, Rebecca; Bergstrom, Betty; Fisher, Anne – Journal of Outcome Measurement, 1997
The development of an online performance assessment instrument, the Assessment of Motor and Process Skills, is reported. Issues addressed include development, implementation, and validation of the scoring rubric in an extended Rasch model, rater training, and implementation of the assessment in a computerized program. (SLD)
Descriptors: Computer Assisted Testing, Item Response Theory, Online Systems, Performance Based Assessment
Peer reviewed
Solano-Flores, Guillermo; Shavelson, Richard J. – Educational Measurement: Issues and Practice, 1997
Conceptual, practical, and logistical issues in the development of science performance assessments (SPAs) are discussed. The conceptual framework identifies task, response format, and scoring system as components, and conceives of SPAs as tasks that attempt to recreate conditions in which scientists work. Developing SPAs is a sophisticated effort…
Descriptors: Elementary Secondary Education, Performance Based Assessment, Science Education, Science Tests
Peer reviewed
Norcini, John J.; Shea, Judy A. – Applied Measurement in Education, 1997
The major forms of evidence that support a standard's credibility are reviewed, and what can be done over time and for different forms of an examination to enhance its comparability in a credentialing setting is outlined. Pass-fail decisions must be consistent to ensure a standard's credibility. (SLD)
Descriptors: Certification, Comparative Analysis, Credentials, Credibility
Peer reviewed
Oxford, Rebecca L. – Applied Language Learning, 1996
Discusses the validity of the most widely employed second language learning strategy questionnaire, the English-as-a-Second-Language/English-as-a-Foreign-Language version of the Strategy Inventory for Language Learning. Details appropriate uses and limitations of questionnaires for strategy assessment. (80 references) (Author/CK)
Descriptors: Cognitive Style, English (Second Language), Learning Strategies, Measurement Techniques
Peer reviewed
Wyngaard, Sandra; Gehrke, Rachel – English Journal, 1996
Puts forth a plan for raising student consciousness about audience through peer evaluation sessions. Targets three areas of effective autobiographical writing: an engaging opening; focus; and showing, not telling. Discusses grading rubrics for these categories appropriate for student readers. (TB)
Descriptors: Audience Awareness, Autobiographies, Peer Evaluation, Personal Narratives
Peer reviewed
Hunter, Darryl M.; And Others – Canadian Journal of Program Evaluation/La Revue canadienne d'evaluation de programme, 1996
Discusses issues associated with the use of holistic and analytic scoring methods in large-scale student assessments. Reviews the education literature and describes the recent Saskatchewan study comparing the results of the two scoring approaches applied to an achievement test taken by about 1,600 third graders. (SLD)
Descriptors: Achievement Tests, Comparative Analysis, Educational Research, Elementary Education
Peer reviewed
Purcell, Jeanne H.; Burns, Deborah E.; Tomlinson, Carol Ann; Imbeau, Marcia B.; Martin, Judith L. – Gifted Child Quarterly, 2002
This article includes information about the development of a rubric originally designed to assess the quality of curricular units that are submitted annually to the National Association for Gifted Children Curriculum Division's Curriculum Competition. Information about four different, but related, uses for the rubric is provided. (Contains…
Descriptors: Curriculum Design, Curriculum Development, Elementary Secondary Education, Evaluation Methods