McGaw, Barry – Assessment in Education: Principles, Policy & Practice, 2008
In their reactions to my paper, the four authors provide comments that are illuminating and helpful for continuing discussions of the nature and utility of quantitative, comparative, international studies of educational achievement. In this response, I comment further on the issues of test characteristics, sample design, culture and causation.
Descriptors: Test Format, International Studies, Academic Achievement, Evaluation

Hoachlander, E. Gareth – Techniques: Making Education and Career Connections, 1998
Discusses state testing, various types of tests, and whether the increased attention to assessment is contributing to improved student learning. Describes uses of standardized multiple-choice, open-ended constructed response, essay, performance event, and portfolio methods. (JOW)
Descriptors: Academic Achievement, Student Evaluation, Test Format, Test Reliability

Streiner, David L.; Miller, Harold R. – Journal of Clinical Psychology, 1986
Numerous short forms of the Minnesota Multiphasic Personality Inventory have been proposed in the last 15 years. In each case, initial enthusiasm has been replaced by questions about the clinical utility of the abbreviated version. Argues that the statistical properties of the test and the reduced reliability that follows from shortening the scales…
Descriptors: Test Construction, Test Format, Test Length, Test Reliability
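The reliability loss from shortening scales, as discussed in this abstract, is conventionally quantified with the Spearman-Brown prophecy formula. A minimal sketch, with illustrative reliability values that are not taken from the paper:

```python
def spearman_brown(reliability: float, length_ratio: float) -> float:
    """Predict reliability after changing test length by length_ratio
    (e.g. 1/3 for a short form one third the original length)."""
    return (length_ratio * reliability) / (1 + (length_ratio - 1) * reliability)

# Illustrative values only: a full-length scale with reliability 0.90,
# cut to one third of its items, is predicted to drop to 0.75.
short_form = spearman_brown(0.90, 1 / 3)
```

The formula also runs the other way: a length ratio greater than 1 predicts the reliability gained by lengthening a test.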

Chambers, William V. – Social Behavior and Personality, 1985
Personal construct psychologists have suggested that various psychological functions explain differences in the stability of constructs. Among these functions are constellatory and loose construction. This paper argues that measurement error is a more parsimonious explanation of the differences in construct stability reported in these studies. (Author)
Descriptors: Error of Measurement, Test Construction, Test Format, Test Reliability

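The measurement-error account argued here can be made concrete with a standard classical-test-theory bound: an observed test-retest (stability) correlation cannot exceed the geometric mean of the reliabilities at the two occasions. A minimal sketch, with assumed reliability values:

```python
import math

def max_stability(rel_time1: float, rel_time2: float) -> float:
    """Classical attenuation bound: the observed test-retest correlation
    can be no larger than sqrt(rxx * ryy)."""
    return math.sqrt(rel_time1 * rel_time2)

# With assumed reliabilities of 0.64 and 0.81, observed stability is
# capped at 0.72 even if the underlying construct is perfectly stable.
bound = max_stability(0.64, 0.81)
```

On this view, apparently "unstable" constructs may simply be measured less reliably.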
Fishman, Judith – Writing Program Administration, 1984
Examines the CUNY-WAT program and questions many aspects of it, especially the choice and phrasing of topics. (FL)
Descriptors: Essay Tests, Higher Education, Test Format, Test Items

Ebel, Robert L. – 1981
An alternate-choice test item is a simple declarative sentence, one portion of which is given with two different wordings. For example, "Foundations like Ford and Carnegie tend to be (1) eager (2) hesitant to support innovative solutions to educational problems." The examinee's task is to choose the alternative that makes the sentence…
Descriptors: Comparative Testing, Difficulty Level, Guessing (Tests), Multiple Choice Tests
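With only two response options, blind guessing on an alternate-choice item succeeds half the time. The traditional correction-for-guessing formula, sketched below with made-up numbers, removes that expected inflation:

```python
def corrected_score(right: int, wrong: int, options: int) -> float:
    """Classical correction for guessing: R - W/(k - 1), where k is the
    number of response options (k = 2 for alternate-choice items)."""
    return right - wrong / (options - 1)

# Made-up numbers: 70 right and 30 wrong on a 100-item alternate-choice
# test gives a chance-corrected score of 40.
score = corrected_score(70, 30, 2)
```

For two-option items the correction reduces to rights minus wrongs, which is why guessing is a sharper concern here than on four- or five-option items.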

Putnam, Lillian R. – Journal of Reading, 1986
Criticizes the Detroit Tests of Learning Aptitude 2 (DTLA-2): (1) scoring criteria for the Story Construction Test are questionable; (2) the Word Fragment Test may not be practically significant; (3) the Picture Book is inconvenient to use without an index or table of contents. One major strength is the provision for combining subtest scores. (SRT)
Descriptors: Aptitude Tests, Intelligence Tests, Learning Processes, Scores

Kolstad, Rosemarie; And Others – Journal of Dental Education, 1982
Nonrestricted-answer multiple-choice test items are recommended as a way of including more facts and fewer incorrect answers in each item; they also do not cue successful guessing the way restricted multiple-choice items can. Examination construction, scoring, and reliability are discussed. (MSE)
Descriptors: Guessing (Tests), Higher Education, Item Analysis, Multiple Choice Tests

Greenberg, Karen L. – WPA: Writing Program Administration, 1992
Elaborates on and responds to challenges of direct writing assessment. Speculates on future directions in writing assessment. Suggests that, if writing instructors accept that writing is a multidimensional, situational construct that fluctuates across a wide variety of contexts, then they must also respect the complexity of teaching and testing…
Descriptors: Essay Tests, Higher Education, Multiple Choice Tests, Test Format

Brittain, Mary M.; Brittain, Clay V. – 1981
A behavioral domain is well-defined when it is clear to both test developers and test users which categories of performance should or should not be considered for potential test items. Only those tests that are keyed to well-defined domains meet the definition of criterion-referenced tests. The greatest proliferation of criterion-referenced tests…
Descriptors: Criterion Referenced Tests, Reading Achievement, Reading Tests, Test Construction

Drain, Susan; Manos, Kenna – English Quarterly, 1986
Reviews a writing abilities competency test based on samples of essay writing. A copy of the test is appended. (NKA)
Descriptors: Essays, Higher Education, Language Tests, Test Construction

Plastre, Guy – Canadian Modern Language Review, 1981
Starting from the premise that the assessment of second language learners' competence is a must at several points during the learning process, discusses the usefulness of laboratory testing. Argues that it allows for standardized testing, facility of test administration, easy scoring, objective measures, reliability and validity of results. (MES)
Descriptors: English, French, Language Laboratories, Scoring

Martin, Randy – 1988
Reasons for administering tests fall into two categories--decision-making and promoting learning. The two bases of tests are learning objectives and the level of learning at which training is developed. Test development involves a number of steps. The best way to tie objectives to test items is through the use of a table of specifications, which…
Descriptors: Elementary Secondary Education, Item Analysis, Item Banks, Postsecondary Education

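A table of specifications, as described in this abstract, crosses learning objectives with levels of learning and allocates items to each cell. A minimal sketch with hypothetical objectives and item counts:

```python
# Hypothetical table of specifications: each objective is crossed with a
# level of learning, and each cell holds the number of items planned.
spec = {
    "define key terms":       {"recall": 4, "application": 0},
    "interpret item stats":   {"recall": 2, "application": 3},
    "build a test blueprint": {"recall": 1, "application": 2},
}

# Summing the columns shows how the item budget is distributed across
# levels of learning, which is the check the blueprint exists to support.
items_per_level = {"recall": 0, "application": 0}
for levels in spec.values():
    for level, n in levels.items():
        items_per_level[level] += n
```

The same structure makes it easy to confirm that every objective is represented before items are written.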
Kemerer, Richard; Wahlstrom, Merlin – Performance and Instruction, 1985
Compares the features, learning outcomes tested, reliability, viability, and cost effectiveness of essay tests with those of interpretive tests used in training programs. A case study illustrating how an essay test was converted to an interpretive test and pilot tested is included to illustrate the advantages of interpretive testing. (MBR)
Descriptors: Case Studies, Comparative Analysis, Cost Effectiveness, Essay Tests

Shrock, Sharon A.; Foshay, Wellesley R. – Performance and Instruction, 1984
Discusses methods of sampling the best information from instruction/training developers/candidates for professional certification and examines the problems of interpreting that information and making classification decisions. Assessment strategies including criterion-referenced, multiple-choice, short answer, and essay questions, and portfolio…
Descriptors: Certification, Competence, Criterion Referenced Tests, Instructional Development