Showing 5,431 to 5,445 of 9,554 results
Peer reviewed
Zickar, Michael J.; Ury, Karen L. – Educational and Psychological Measurement, 2002
Attempted to relate content features of personality items to item parameter estimates from the partial credit model of E. Muraki (1990) by administering the Adjective Checklist (L. Goldberg, 1992) to 329 undergraduates. As predicted, the discrimination parameter was related to the item subtlety ratings of personality items but the level of word…
Descriptors: Correlation, Estimation (Mathematics), Higher Education, Personality Assessment
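
The Zickar and Ury entry rests on Muraki's (generalized) partial credit model, in which a polytomous item has a discrimination parameter and a set of step difficulties, and the probability of responding in category k is proportional to the exponential of the cumulative sum of a(theta - b_v) up to k. A minimal sketch of those category probabilities, with hypothetical parameter values and an illustrative function name not taken from the article:

    import numpy as np

    def gpcm_category_probs(theta, a, steps):
        # Generalized partial credit model: P(X = k | theta) is proportional to
        # exp(sum over v <= k of a * (theta - b_v)), with the empty sum for k = 0.
        z = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(steps)))))
        ez = np.exp(z - z.max())          # subtract the max for numerical stability
        return ez / ez.sum()

    # Hypothetical four-category adjective item, respondent at theta = 0.3
    print(gpcm_category_probs(theta=0.3, a=1.1, steps=[-1.0, 0.2, 1.4]))
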
Peer reviewed
Wainer, Howard – Educational Measurement: Issues and Practice, 1999
Discusses the comparison of groups of individuals who were administered different forms of a test. Focuses on the situation in which there is little overlap in content between the test forms. Reviews equating problems in national tests in Canada and Israel. (SLD)
Descriptors: Comparative Analysis, Equated Scores, Foreign Countries, National Competency Tests
Peer reviewed
Joughin, Gordon – Assessment & Evaluation in Higher Education, 1998
Analysis of literature on oral assessment in college instruction identified six dimensions: primary content type; interaction between examiner and learner; authenticity of assessment task; structure of assessment task; examiner; and orality (extent to which knowledge is tested orally). These help in understanding the nature of oral assessment and…
Descriptors: College Instruction, Higher Education, Student Evaluation, Test Format
Peer reviewed
Arnau, Randolph C.; Thompson, Russel L.; Cook, Colleen – Educational and Psychological Measurement, 2001
Used two different coherent cut kinetics taxometric procedures to examine the latent structure of responses to a survey of library service quality using an unnumbered slider-bar user interface and a radio-button user interface. Results for 354 college students show that both interfaces yield similar latent structures of survey item responses. (SLD)
Descriptors: College Students, Higher Education, Responses, Surveys
Johnson, Carol; Vanneman, Alan – Education Statistics Quarterly, 2001
Describes the performance of eighth graders on 37 questions from the National Assessment of Educational Progress 1998 Civics Assessment and indicates percentages of students answering given questions correctly. Includes samples of students' written responses. (Author/SLD)
Descriptors: Citizenship Education, Civics, Grade 8, Junior High School Students
Peer reviewed
Eid, Ghada K. – Education, 2005
In educational settings, achievement is one of the most significant factors for assessing the effectiveness of an educational institution, as it is the criterion used to evaluate the performance of both individuals and institutions. The score obtained from an achievement test provides primarily two types of information: one is the degree to which the…
Descriptors: Test Items, Standardized Tests, Item Response Theory, Sample Size
Peer reviewed
Revuelta, Javier – Psychometrika, 2005
Complete response vectors of all answer options in multiple-choice items can be used to estimate ability. The rising selection ratios criterion is necessary for scoring individuals because it implies that estimated ability always increases when the correct alternative is selected. This paper introduces the generalized DLT model, which assumes…
Descriptors: Multiple Choice Tests, Simulation, Item Response Theory, Models
Peer reviewed
Hammerman, Elizabeth – Science Scope, 2005
In this article the author presents a model that shows how inquiry-based instruction and creative classroom assessment can be used to teach the concepts and principles on which standardized test items are based. To successfully implement this model, the teacher must have a clear understanding of: (1) The ways that the NSES link to instruction and…
Descriptors: Teaching Methods, Test Items, Standardized Tests, Student Evaluation
Peer reviewed
Doran, Harold C. – Educational and Psychological Measurement, 2005
The information function is an important statistic in item response theory (IRT) applications. Although the information function is often described as the IRT version of reliability, it differs from the classical notion of reliability in one critical respect: replication. This article first explores the information function for the…
Descriptors: Item Response Theory, Error of Measurement, Evaluation Methods, Reliability
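
For context on the Doran entry: under the two-parameter logistic model the item information is I(theta) = a^2 P(theta)(1 - P(theta)), item contributions add to the test information, and the conditional standard error of the ability estimate is its reciprocal square root. A brief sketch with made-up item parameters (names and values are illustrative, not from the article):

    import numpy as np

    def item_information_2pl(theta, a, b):
        # 2PL item information: I(theta) = a^2 * P(theta) * (1 - P(theta))
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    a = np.array([1.2, 0.8, 1.5])     # hypothetical discriminations
    b = np.array([-0.5, 0.0, 1.0])    # hypothetical difficulties
    test_info = item_information_2pl(0.5, a, b).sum()   # information adds across items
    se_theta = 1.0 / np.sqrt(test_info)                 # conditional standard error at theta = 0.5
    print(test_info, se_theta)
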
Peer reviewed
Bryant, Damon U.; Wooten, William – International Journal of Testing, 2006
The purpose of this study was to demonstrate how cognitive and measurement principles can be integrated to create an essentially unidimensional test. Two studies were conducted. In Study 1, test questions were created by using the feature integration theory of attention to develop a cognitive model of performance and then manipulating complexity…
Descriptors: Test Construction, Cognitive Measurement, Theories, Attention
Peer reviewed
Wapnick, Joel; Ryan, Charlene; Campbell, Louise; Deek, Patricia; Lemire, Renata; Darrow, Alice-Ann – Journal of Research in Music Education, 2005
The purpose of this study was to determine how judgments of solo performances recorded at an international piano competition might be affected by excerpt duration (20 versus 60 seconds) and tempo (slow versus fast). Musicians rated performances on six test items. Results indicated that piano majors rated slow excerpts higher than they rated fast…
Descriptors: Music Activities, Musicians, Music Education, Student Evaluation
Peer reviewed
Si, Ching-Fung; Schumacker, Randall E. – International Journal of Testing, 2004
Testing is essential in education and other social science fields because many assessments, decisions, and policies are based on test results. The purpose of testing is to estimate a person's ability, that is, a latent trait or construct. In a test setting, responses to a set of test items by each individual are recorded. Through…
Descriptors: Test Items, Scoring, Testing, Cognitive Ability
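
The Si and Schumacker entry concerns recovering a person's latent ability from recorded item responses. One standard route is maximum likelihood under an IRT model; below is a minimal grid-search sketch under the Rasch model with item difficulties treated as known (all names and values are hypothetical, not drawn from the article):

    import numpy as np

    def rasch_ability_mle(responses, difficulties, grid=np.linspace(-4, 4, 801)):
        # Log-likelihood of each candidate theta for 0/1 responses under the Rasch model
        p = 1.0 / (1.0 + np.exp(-(grid[:, None] - difficulties[None, :])))
        loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
        return grid[np.argmax(loglik)]           # theta maximizing the likelihood

    responses = np.array([1, 1, 0, 1, 0])        # hypothetical item scores
    difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    print(rasch_ability_mle(responses, difficulties))

Note that an all-correct or all-incorrect response pattern would push the estimate to the grid boundary; in practice a Bayesian estimator is often preferred for such cases.
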
Peer reviewed
Hagtvet, Knut A.; Nasser, Fadia M. – Structural Equation Modeling, 2004
This article presents a methodology for examining the content and nature of item parcels as indicators of a conceptually defined latent construct. An essential component of this methodology is the 2-facet measurement model, which includes items and parcels as facets of construct indicators. The 2-facet model tests assumptions required for…
Descriptors: Evaluation Methods, Validity, Test Anxiety, Content Validity
Peer reviewed
Gierl, Mark J.; Gotzmann, Andrea; Boughton, Keith A. – Applied Measurement in Education, 2004
Differential item functioning (DIF) analyses are used to identify items that operate differently between two groups, after controlling for ability. The Simultaneous Item Bias Test (SIBTEST) is a popular DIF detection method that matches examinees on a true score estimate of ability. However, in some testing situations, such as test translation and…
Descriptors: True Scores, Simulation, Test Bias, Student Evaluation
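
The Gierl, Gotzmann, and Boughton entry concerns SIBTEST, which compares reference and focal examinees matched on a subtest score. The sketch below computes only the simplest ingredient, a stratum-weighted difference in studied-item means; it omits SIBTEST's regression correction of the matching scores, and all names and data are hypothetical:

    import numpy as np

    def weighted_group_difference(studied, matching, is_reference, n_strata=10):
        # studied: scores on the studied item(s); matching: matching-subtest scores;
        # is_reference: boolean array, True for reference-group examinees.
        # Stratify on the matching score, then accumulate the stratum-weighted
        # difference in mean studied-item score (reference minus focal).
        edges = np.quantile(matching, np.linspace(0, 1, n_strata + 1)[1:-1])
        strata = np.digitize(matching, edges)
        beta, n = 0.0, len(studied)
        for k in np.unique(strata):
            in_k = strata == k
            ref = studied[in_k & is_reference]
            foc = studied[in_k & ~is_reference]
            if len(ref) and len(foc):
                beta += (in_k.sum() / n) * (ref.mean() - foc.mean())
        return beta
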
Peer reviewed
MacCann, Robert G. – Psychometrika, 2004
For (0, 1) scored multiple-choice tests, a formula giving test reliability as a function of the number of item options is derived, assuming the "knowledge or random guessing model," the parallelism of the new and old tests (apart from the guessing probability), and the assumptions of classical test theory. It is shown that the formula is a more…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Reliability, Test Theory
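
The MacCann entry builds on the knowledge-or-random-guessing model, under which an examinee either knows an item or guesses uniformly among its c options, so the probability of a correct response is kappa + (1 - kappa)/c. The derived reliability formula is not reproduced here; the sketch below shows only the familiar correction-for-guessing score implied by the same model (the function name is illustrative):

    def corrected_for_guessing(num_right, num_wrong, num_options):
        # Under knowledge-or-random-guessing, wrong answers arise only from guessing,
        # so the expected number of lucky guesses is num_wrong / (num_options - 1).
        return num_right - num_wrong / (num_options - 1)

    print(corrected_for_guessing(num_right=32, num_wrong=8, num_options=4))   # = 32 - 8/3
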