Showing 2,881 to 2,895 of 4,794 results
Peer reviewed
Mitchelmore, M. C. – British Journal of Educational Psychology, 1981
This paper presents a scientific rationale for deciding the number of points to use on a grading scale in any given assessment situation. The rationale is applied to two common methods of assessment (multiple-choice and essay tests) and an example of a composite assessment. (Author/SJL)
Descriptors: Error of Measurement, Essay Tests, Grading, Higher Education
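
A minimal sketch of one common rationale of this kind (an assumption here, not necessarily the paper's own derivation): the number of usefully distinct grade points is bounded by the standard error of measurement (SEM), since adjacent grades should be separated by more than measurement noise.

    import math

    def usable_grade_points(score_sd, reliability, score_range, sem_multiple=2.0):
        """Rough upper bound on distinguishable grade points.

        Assumes adjacent grade boundaries should be separated by at least
        sem_multiple standard errors of measurement -- a conventional rule,
        not necessarily Mitchelmore's criterion.
        """
        sem = score_sd * math.sqrt(1.0 - reliability)  # classical SEM
        return max(2, int(score_range / (sem_multiple * sem)) + 1)

    # Example: a 100-point test with SD = 15 and reliability = 0.90
    # supports only about 11 distinguishable grade points.
    print(usable_grade_points(score_sd=15, reliability=0.90, score_range=100))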
Peer reviewed
Bajtelsmit, John W. – Educational and Psychological Measurement, 1979
A validation procedure was used involving a matrix of intercorrelations among tests representing four areas of Chartered Life Underwriter content knowledge, each measured by objective multiple-choice and essay methods. Results indicated that the two methods of measuring the same trait yielded fairly consistent estimates of content…
Descriptors: Essay Tests, Higher Education, Insurance Occupations, Multiple Choice Tests
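
A hedged sketch of the multitrait-multimethod logic the abstract describes, with hypothetical data and column names: the convergent validities are the correlations between the same content area measured by the two methods.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical scores: four content areas, each measured twice.
    latent = rng.normal(size=(n, 4))                     # true content knowledge
    mc = latent + rng.normal(scale=0.5, size=(n, 4))     # multiple-choice measures
    essay = latent + rng.normal(scale=0.7, size=(n, 4))  # essay measures

    cols = [f"area{i}_mc" for i in range(1, 5)] + [f"area{i}_essay" for i in range(1, 5)]
    r = pd.DataFrame(np.hstack([mc, essay]), columns=cols).corr()

    # Convergent validity diagonal: same trait, different method.
    print([round(r.loc[f"area{i}_mc", f"area{i}_essay"], 2) for i in range(1, 5)])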
Peer reviewed
Hanna, Gerald S.; Oaster, Thomas R. – Educational and Psychological Measurement, 1980
Certain kinds of multiple-choice reading comprehension questions may be answered correctly at a higher-than-chance level when they are administered without the accompanying passage. These high-risk questions do not necessarily lead to passage-dependence invalidity; they threaten but do not prove invalidity. (Author/CP)
Descriptors: High Schools, Multiple Choice Tests, Reading Comprehension, Reading Tests
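
A minimal sketch of the kind of screening this implies (the authors' exact procedure is not given in the abstract): an exact binomial test of whether an item administered without its passage is answered correctly more often than chance.

    from scipy.stats import binomtest

    def passage_independent(n_correct, n_examinees, n_options=4, alpha=0.05):
        """One-sided exact binomial test against the guessing rate 1/n_options.

        A significant result flags the item as answerable above chance without
        the passage -- which, as the abstract notes, threatens but does not
        prove passage-dependence invalidity.
        """
        res = binomtest(n_correct, n_examinees, p=1.0 / n_options,
                        alternative="greater")
        return res.pvalue < alpha, res.pvalue

    # Example: 45 of 100 examinees answer a 4-option item correctly
    # without ever seeing the passage.
    print(passage_independent(45, 100))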
Peer reviewed
Shaughnessy, John J. – Journal of Research in Personality, 1979
To determine the extent of students' confidence-judgment accuracy (CJA) and the relationship of this memory-monitoring ability to overall test performance, undergraduates in a psychology course supplied confidence judgments along with their answers on multiple-choice test items. CJA correlated positively with test performance. (Editor/SJL)
Descriptors: Academic Achievement, College Students, Confidence Testing, Correlation
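
A hedged sketch of one way to score confidence-judgment accuracy (the study's exact index is not given in the abstract): a within-person correlation between item-level confidence and correctness, which can then be correlated with total test score across students.

    import numpy as np

    def cja(confidence, correct):
        """Within-person confidence-judgment accuracy: the correlation between
        a student's item confidence ratings and item correctness (0/1).
        Returns nan if either vector is constant."""
        confidence = np.asarray(confidence, dtype=float)
        correct = np.asarray(correct, dtype=float)
        if confidence.std() == 0 or correct.std() == 0:
            return float("nan")
        return float(np.corrcoef(confidence, correct)[0, 1])

    # One student's 1-5 confidence ratings and correctness on ten items.
    print(round(cja([5, 4, 2, 5, 3, 1, 4, 5, 2, 3],
                    [1, 1, 0, 1, 1, 0, 1, 1, 0, 0]), 2))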
Baillargeon, Jarvis H. – School Shop, 1977
The author gives several reasons for students' varying abilities to take tests, then discusses how to write tests of different forms--essay questions, true-false questions, matching questions, completion questions--noting that the teacher who spends time writing good items will save time and effort in scoring. (HD)
Descriptors: Academic Achievement, Achievement Tests, Educational Testing, Essay Tests
Peer reviewed
Newman, Dianna L.; And Others – Applied Measurement in Education, 1988
The effect of using statistical and cognitive item difficulty to determine item order on multiple-choice tests was examined, using 120 undergraduate students. Students performed better when items were ordered by increasing cognitive difficulty rather than decreasing difficulty. The statistical ordering of difficulty had little effect on…
Descriptors: Cognitive Tests, Difficulty Level, Higher Education, Multiple Choice Tests
Peer reviewed
Potenza, Maria T.; Dorans, Neil J. – Applied Psychological Measurement, 1995
A classification scheme for procedures that detect differential item functioning (DIF) in dichotomously scored items is presented; the scheme is also applicable to new DIF procedures for polytomously scored items. A formal development of a polytomous version of a dichotomous DIF technique is presented. (SLD)
Descriptors: Classification, Evaluation Methods, Identification, Item Bias
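
For the dichotomous case, the classic baseline is the Mantel-Haenszel procedure (an assumption here; the article covers a family of such procedures). A minimal sketch, matching examinees on a stratifying variable such as total score:

    import numpy as np

    def mh_ddif(item, group, strata):
        """Mantel-Haenszel D-DIF for a dichotomous item.

        item   : 0/1 correctness per examinee
        group  : 0 = reference, 1 = focal
        strata : matching variable (e.g., total score), one level per examinee
        Returns the ETS delta-scale statistic -2.35 * ln(alpha_MH); negative
        values indicate DIF against the focal group.
        """
        item, group, strata = map(np.asarray, (item, group, strata))
        num = den = 0.0
        for s in np.unique(strata):
            m = strata == s
            a = np.sum(m & (group == 0) & (item == 1))  # reference correct
            b = np.sum(m & (group == 0) & (item == 0))  # reference incorrect
            c = np.sum(m & (group == 1) & (item == 1))  # focal correct
            d = np.sum(m & (group == 1) & (item == 0))  # focal incorrect
            t = a + b + c + d
            if t == 0:
                continue
            num += a * d / t
            den += b * c / t
        alpha_mh = num / den  # common odds ratio across strata
        return -2.35 * np.log(alpha_mh)

    # Synthetic example: 400 examinees, uniform DIF against the focal group.
    rng = np.random.default_rng(0)
    grp = rng.integers(0, 2, 400)
    abil = rng.normal(size=400)
    correct = (rng.random(400) < 1 / (1 + np.exp(-(abil - 0.4 * grp)))).astype(int)
    strata = np.digitize(abil, [-1, 0, 1])  # crude matching strata
    print(mh_ddif(correct, grp, strata))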
Peer reviewed
Thissen, David; And Others – Applied Psychological Measurement, 1995
Methods are described, based on item response theory, that provide scaled scores, or estimates of trait level, for each summed score for rated responses or for combinations of rated responses and multiple-choice items. These useful methods avoid problems associated with response-pattern scoring. (SLD)
Descriptors: Constructed Response, Estimation (Mathematics), Item Response Theory, Multiple Choice Tests
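
The recursion behind summed-score IRT scoring of this kind is usually credited to Lord and Wingersky; a minimal sketch for dichotomous 2PL items only (the article also covers rated responses, which this omits):

    import numpy as np

    def summed_score_eap(a, b, theta=np.linspace(-4, 4, 81)):
        """EAP trait estimate for each possible summed score under a 2PL model.

        a, b : item discriminations and difficulties
        Uses the Lord-Wingersky recursion to get P(summed score = s | theta),
        then averages theta over the standard-normal posterior for each score.
        """
        a = np.asarray(a, dtype=float)[:, None]
        b = np.asarray(b, dtype=float)[:, None]
        prior = np.exp(-0.5 * theta**2)
        prior /= prior.sum()

        p = 1.0 / (1.0 + np.exp(-a * (theta[None, :] - b)))  # P(correct | theta)

        # Recursion: row s of L holds P(summed score = s | theta) so far.
        L = np.ones((1, theta.size))
        for p_i in p:
            new = np.zeros((L.shape[0] + 1, theta.size))
            new[:-1] += L * (1 - p_i)  # incorrect: score unchanged
            new[1:] += L * p_i         # correct: score + 1
            L = new

        post = L * prior  # unnormalized posterior at each quadrature point
        return (post * theta).sum(axis=1) / post.sum(axis=1)

    # Example: scaled scores for summed scores 0..5 on five hypothetical items.
    print(np.round(summed_score_eap(a=[1.2, 0.8, 1.5, 1.0, 0.9],
                                    b=[-1.0, -0.5, 0.0, 0.5, 1.0]), 2))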
Peer reviewed
Huffman, Douglas; Heller, Patricia – Physics Teacher, 1995
The Force Concept Inventory (FCI) is a 29-question, multiple-choice test designed to assess students' Newtonian and non-Newtonian conceptions of force. Presents an analysis of FCI results as one way to determine what the inventory actually measures. (LZ)
Descriptors: Evaluation Methods, Force, Multiple Choice Tests, Physics
Peer reviewed
Carlson, J. Lon; Ostrosky, Anthony L. – Journal of Economic Education, 1992
Discusses effects of test question order on student performance. Addresses (1) differences in the distribution of scores on each form of the examination; (2) effects on the validity of individual examination items; and (3) effects on the reliability of the examination instrument. Concludes that distribution of examination scores may be influenced…
Descriptors: Economics Education, Evaluation Research, Higher Education, Multiple Choice Tests
Peer reviewed
McDaniel, Mark A.; And Others – Contemporary Educational Psychology, 1994
Two experiments with 112 college students investigated how subjects might modulate their reading strategies as a function of how they expect to be tested. Test-expectancy subjects, regardless of the test expected, are more apt to identify and focus on important information than are subjects without a specific test expectancy. (SLD)
Descriptors: Cognitive Processes, College Students, Essays, Expectation
Peer reviewed
Hampton, David R.; And Others – Journal of Education for Business, 1993
Four management and four marketing professors classified multiple-choice questions in four widely adopted introductory textbooks according to two levels of Bloom's taxonomy of educational objectives: knowledge, and intellectual abilities and skills. Inaccuracies may cause instructors to select questions that require less thinking than they intend.…
Descriptors: Administrator Education, Case Studies, Higher Education, Marketing
Peer reviewed
Geiger, Marshall A. – Journal of Experimental Education, 1997
Relationships between multiple-choice test answer changing and testwiseness skills, and between these two variables and examination performance, were studied with 150 college business students. Answer-changing behavior was related to multiple-choice test performance but not to testwiseness or to performance on the non-multiple-choice portion.…
Descriptors: Business Education, College Students, Higher Education, Multiple Choice Tests
Peer reviewed
Burton, Richard F.; Miller, David J. – Assessment & Evaluation in Higher Education, 1999
Discusses statistical procedures for assessing the test unreliability that guessing introduces into multiple-choice and true/false tests. Proposes two new measures of test unreliability: one concerned with the resolution of defined levels of knowledge, the other with the probability of examinees being incorrectly ranked. Both models are based on the binomial…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Objective Tests
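
A minimal sketch of the binomial guessing model the abstract invokes (names and parameterization assumed): an examinee who knows k of n items and blind-guesses the rest among c options scores X = k + Binomial(n - k, 1/c), from which the probability of two examinees being mis-ranked follows directly.

    import numpy as np
    from scipy.stats import binom

    def score_dist(k_known, n_items, n_options):
        """P(observed score = s) when k_known items are known and the rest
        are guessed with success probability 1/n_options."""
        probs = np.zeros(n_items + 1)
        g = binom(n_items - k_known, 1.0 / n_options)
        for extra in range(n_items - k_known + 1):
            probs[k_known + extra] = g.pmf(extra)
        return probs

    def p_misranked(k_low, k_high, n_items, n_options):
        """Probability that the weaker examinee (knows k_low items) outscores
        the stronger one (knows k_high items) through guessing alone."""
        p_lo = score_dist(k_low, n_items, n_options)
        p_hi = score_dist(k_high, n_items, n_options)
        # Sum over all score pairs where the weaker examinee scores higher.
        return float(sum(p_lo[s] * p_hi[:s].sum() for s in range(n_items + 1)))

    # Example: 50 true/false items; examinees know 30 vs. 35 items.
    print(round(p_misranked(30, 35, n_items=50, n_options=2), 3))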
Peer reviewed
Cizek, Gregory J.; Robinson, K. Lynne; O'Day, Denis M. – Educational and Psychological Measurement, 1998
The effect of removing nonfunctioning items from multiple-choice tests was studied by examining change in difficulty, discrimination, and dimensionality. Results provide additional support for the benefits of eliminating nonfunctioning options, such as enhanced score reliability, reduced testing time, potential for broader domain sampling, and…
Descriptors: Difficulty Level, Multiple Choice Tests, Sampling, Scores
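
A hedged sketch of one common operational definition of a nonfunctioning option (the authors' exact criterion is not given in the abstract): a distractor chosen by almost no examinees, or one whose choosers tend to have high total scores.

    import numpy as np

    def nonfunctioning_options(responses, scores, key, min_pick_rate=0.05):
        """Flag distractors chosen by < min_pick_rate of examinees or that
        correlate positively with total score (both signs of a non-working
        option).

        responses : chosen option index per examinee
        scores    : total test scores
        key       : index of the correct option
        """
        responses = np.asarray(responses)
        scores = np.asarray(scores, dtype=float)
        flagged = []
        for opt in np.unique(responses):
            if opt == key:
                continue
            picked = (responses == opt).astype(float)
            rate = picked.mean()
            r = np.corrcoef(picked, scores)[0, 1] if picked.std() > 0 else 0.0
            if rate < min_pick_rate or r > 0:
                flagged.append(int(opt))
        return flagged

    # Example: option 3 is correct; option 0 is rarely chosen.
    rng = np.random.default_rng(1)
    resp = rng.choice([0, 1, 2, 3], size=300, p=[0.02, 0.18, 0.20, 0.60])
    tot = rng.normal(25, 5, size=300) + 5 * (resp == 3)
    print(nonfunctioning_options(resp, tot, key=3))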