Showing 1 to 15 of 17 results
Peer reviewed; PDF full text available on ERIC
Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2016
Energy is a core concept in the teaching of science. Therefore, it is important to know how students' thinking about energy develops so that elementary, middle, and high school students can be appropriately supported in their understanding of energy. This study tests the validity of a proposed theoretical model of students' growth of understanding…
Descriptors: Item Response Theory, Science Tests, Scientific Concepts, Energy
Engelhard, George, Jr.; Wind, Stefanie A. – College Board, 2013
The major purpose of this study is to examine the quality of ratings assigned to constructed-response (CR) questions in large-scale assessments from the perspective of Rasch Measurement Theory. Rasch Measurement Theory provides a framework for the examination of rating scale category structure that can yield useful information for interpreting the…
Descriptors: Measurement Techniques, Rating Scales, Test Theory, Scores
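The rating scale category structure Engelhard and Wind examine comes from Rasch measurement. A minimal sketch of category probabilities under Andrich's rating scale model (the parameterization and values below are illustrative assumptions, not taken from the paper):

```python
import math

def rating_scale_probs(theta, delta, taus):
    """Category response probabilities under Andrich's rating scale
    model: person ability theta, item difficulty delta, and shared
    threshold parameters taus (tau_1..tau_m)."""
    logits = [0.0]          # category 0 has an empty cumulative sum
    running = 0.0
    for tau in taus:
        running += theta - delta - tau
        logits.append(running)
    denom = sum(math.exp(l) for l in logits)
    return [math.exp(l) / denom for l in logits]

# Three-category item (e.g. disagree / neutral / agree)
probs = rating_scale_probs(theta=0.5, delta=0.0, taus=[-1.0, 1.0])
```

Disordered thresholds (taus not in ascending order) are one symptom of the category-structure problems such analyses look for.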
Crislip, Marian A.; Chin-Chance, Selvin – 2001
This paper discusses the use of two theories of item analysis and test construction, their strengths and weaknesses, and applications to the design of the Hawaii State Test of Essential Competencies (HSTEC). Traditional analyses of the data collected from the HSTEC field test were viewed from the perspectives of item difficulty levels and item…
Descriptors: Difficulty Level, Item Response Theory, Psychometrics, Reliability
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
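The stage-one routing step described in the abstract above can be sketched as a simple cutpoint lookup; the cutpoints and the number of second-stage difficulty levels here are illustrative assumptions:

```python
def route_to_stage_two(stage_one_score, cutpoints):
    """Route a test taker to a second-stage form by number-correct
    score on the stage-one test. cutpoints are ascending score
    boundaries; the return value indexes forms ordered easy -> hard."""
    level = 0
    for cut in cutpoints:
        if stage_one_score >= cut:
            level += 1
    return level

# With forms 0 = easy, 1 = medium, 2 = hard:
route_to_stage_two(12, cutpoints=[8, 16])  # routes to the medium form
```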
Lieberman, Marcus – 1973
When subjects are given open-ended stimulus situations allowing for responses at all possible stages of a developmental theory, the stage scores of individuals for each situation can be treated as scores on polychotomous items. Extensions of the concepts of difficulty and discriminating power from the dichotomous case to this ordinal category…
Descriptors: Abstract Reasoning, Behavior Theories, Developmental Psychology, Difficulty Level
Smith, Richard M. – 1982
There have been many attempts to formulate a procedure for extracting information from incorrect responses to multiple choice items, i.e., the assessment of partial knowledge. The results of these attempts can be described as inconsistent at best. It is hypothesized that these inconsistencies arise from three methodological problems: the…
Descriptors: Difficulty Level, Evaluation Methods, Goodness of Fit, Guessing (Tests)
Choppin, Bruce H. – 1983
In the answer-until-correct mode of multiple-choice testing, respondents are directed to continue choosing among the alternatives to each item until they find the correct response. There is no consensus as to how to convert the resulting pattern of responses into a measure because of two conflicting models of item response behavior. The first…
Descriptors: Computer Assisted Testing, Difficulty Level, Guessing (Tests), Knowledge Level
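One common scoring rule for answer-until-correct items, a linear partial-credit rule, can be sketched as below; this is an illustration of the response format, not necessarily either of the two conflicting models Choppin contrasts:

```python
def auc_item_score(num_attempts, num_alternatives):
    """Linear partial-credit score for an answer-until-correct item:
    full credit for a first-attempt success, zero credit when every
    alternative was tried before the key was found."""
    return (num_alternatives - num_attempts) / (num_alternatives - 1)

auc_item_score(1, 4)  # first try on a 4-option item -> full credit
auc_item_score(3, 4)  # two wrong attempts first -> partial credit
```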
Skaggs, Gary; Bourque, Mary Lyn – 1998
Political and legislative pressures have posed a number of measurement issues and challenges to the development of sound, valid voluntary national tests (VNTs). This paper focuses on what appear to be the most difficult technical issues related to the VNT proposed by President Clinton in 1997. Technical issues refer to psychometric issues, as…
Descriptors: Academic Achievement, Achievement Tests, Classification, Difficulty Level
Drasgow, Fritz; Parsons, Charles K. – 1982
The effects of a multidimensional latent trait space on estimation of item and person parameters by the computer program LOGIST are examined. Several item pools were simulated that ranged from truly unidimensional to an inconsequential general latent trait. Item pools with intermediate levels of prepotency of the general latent trait were also…
Descriptors: Computer Simulation, Computer Software, Difficulty Level, Item Analysis
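A minimal sketch of the kind of item-pool simulation Drasgow and Parsons describe, assuming a two-dimensional compensatory logistic model (the function name and parameters are illustrative, not from LOGIST or the paper):

```python
import math
import random

def simulate_2d_responses(n_persons, a1, a2, b, seed=0):
    """Simulate dichotomous item responses from a two-dimensional
    compensatory logistic model:
    P(correct) = 1 / (1 + exp(-(a1*t1 + a2*t2 - b)))."""
    rng = random.Random(seed)
    responses = []
    for _ in range(n_persons):
        t1, t2 = rng.gauss(0, 1), rng.gauss(0, 1)
        p = 1 / (1 + math.exp(-(a1 * t1 + a2 * t2 - b)))
        responses.append(1 if rng.random() < p else 0)
    return responses
```

Varying the ratio of a1 to a2 across the pool moves it from effectively unidimensional toward strongly multidimensional, which is the manipulation at issue when a unidimensional program is fit to such data.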
Peters, Lawrence H.; Rudolf, Cathy J. – 1982
The literature on the partial determinants of performance in organizational contexts, particularly the research on situational determinants, suggests several different variables which may be of importance to appraisal processes as well. This point may be exemplified with regard to the situational factors of task ease/difficulty and situational…
Descriptors: Difficulty Level, Employees, Employers, Evaluation Criteria
Peer reviewed
Fitzpatrick, Anne R.; Yen, Wendy M. – Journal of Educational Measurement, 1995
The psychometric characteristics of constructed response items referring to choice and nonchoice passages administered to approximately 150,000 students in grades 3, 5, and 8 were studied through item response theory methodology. Results indicated no consistent differences in the difficulty and discrimination of items referring to the two types of…
Descriptors: Constructed Response, Difficulty Level, Elementary Education, Elementary School Students
Kromrey, Jeffrey D.; Bacon, Tina P. – 1992
A Monte Carlo study was conducted to estimate the small sample standard errors and statistical bias of psychometric statistics commonly used in the analysis of achievement tests. The statistics examined in this research were: (1) the index of item difficulty; (2) the index of item discrimination; (3) the corrected item-total point-biserial…
Descriptors: Achievement Tests, Comparative Analysis, Difficulty Level, Estimation (Mathematics)
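The classical indices Kromrey and Bacon study, item difficulty and the corrected item-total point-biserial, can be computed as follows; the function names and the 0/1 matrix layout are illustrative assumptions:

```python
import math

def _pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def item_statistics(responses):
    """Classical item analysis for a 0/1 score matrix
    (rows = examinees, columns = items). Returns, per item,
    (difficulty p, corrected item-total point-biserial)."""
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]
    out = []
    for j in range(n_items):
        item = [row[j] for row in responses]
        p = sum(item) / len(item)
        rest = [t - x for t, x in zip(totals, item)]  # total minus item
        out.append((p, _pearson(item, rest)))
    return out
```

The "corrected" point-biserial removes the item itself from the total score, which matters most in exactly the small-sample settings the study targets.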
Wise, Lauress L. – 1986
A primary goal of this study was to determine the extent to which item difficulty was related to item position and, if a significant relationship was found, to suggest adjustments to predicted item difficulty that reflect differences in item position. Item response data from the Medical College Admission Test (MCAT) were analyzed. A data set was…
Descriptors: College Entrance Examinations, Difficulty Level, Educational Research, Error of Measurement
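The position adjustment Wise investigates can be sketched as removing the linear effect of serial position from observed difficulties; this least-squares version is a minimal illustration, not the study's actual adjustment:

```python
def position_adjusted_difficulty(difficulties, positions):
    """Remove the linear effect of item position from observed item
    difficulties: fit a least-squares line of difficulty on position
    and return difficulties adjusted to the mean position."""
    n = len(difficulties)
    mp = sum(positions) / n
    md = sum(difficulties) / n
    slope = (sum((p - mp) * (d - md) for p, d in zip(positions, difficulties))
             / sum((p - mp) ** 2 for p in positions))
    return [d - slope * (p - mp) for d, p in zip(difficulties, positions)]
```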
Smith, Richard M. – 1983
Measurement disturbances, such as guessing, startup, and plodding, often result in an examinee's ability being either over- or under-estimated by the maximum likelihood estimation employed in latent trait psychometric models. Several authors have suggested methods to lessen the impact of unexpected responses on the ability estimation process. This…
Descriptors: Difficulty Level, Error of Measurement, Estimation (Mathematics), Goodness of Fit
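The maximum likelihood ability estimation referenced in the abstract above can be sketched with Newton-Raphson updates under the Rasch model; this is a minimal illustration of the estimator whose disturbance-sensitivity is at issue, not Smith's proposed remedy:

```python
import math

def rasch_mle_theta(responses, difficulties, iters=20):
    """Maximum likelihood ability estimate under the Rasch model for
    one examinee, via Newton-Raphson. responses are 0/1 item scores;
    difficulties are the item b parameters. Assumes the raw score is
    neither 0 nor perfect (the MLE is infinite in those cases)."""
    theta = 0.0
    for _ in range(iters):
        probs = [1 / (1 + math.exp(-(theta - b))) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, probs))
        information = sum(p * (1 - p) for p in probs)
        theta += gradient / information
    return theta
```

A single lucky guess on a hard item (an unexpected 1 late in the difficulties list) pulls this estimate upward, which is the kind of measurement disturbance the paper addresses.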