Showing 6,256 to 6,270 of 9,533 results
Peer reviewed
Johanson, George A.; And Others – Evaluation Review, 1993
Discusses the tendency of some respondents to omit items more often when they have a less positive evaluation to make and less often when the evaluation is more positive. Five examples illustrate this form of nonresponse bias, and recommendations for overcoming it are offered. (SLD)
Descriptors: Estimation (Mathematics), Evaluation Methods, Questionnaires, Response Style (Tests)
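The Johanson abstract rests on a simple mechanism: if respondents omit an item more often when their evaluation is low, the observed item mean drifts upward. A minimal simulation of that mechanism (all numbers invented, not the article's data):

```python
# Hedged sketch: simulate the omission pattern described above -- respondents
# skip an item more often when their true evaluation is low -- and show how the
# observed item mean is biased upward. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
true_ratings = rng.integers(1, 6, size=5000)          # true evaluations on a 1-5 scale
# omission probability falls as the rating rises (0.40 at rating 1, 0.12 at rating 5)
p_omit = 0.40 - 0.07 * (true_ratings - 1)
observed = np.where(rng.random(5000) < p_omit, np.nan, true_ratings.astype(float))

print(f"true mean     = {true_ratings.mean():.2f}")
print(f"observed mean = {np.nanmean(observed):.2f}  (biased upward by nonresponse)")
```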
Peer reviewed
Cunningham, James W.; Moore, David W. – Journal of Reading Behavior, 1993
Investigates whether the vocabulary of written comprehension questions is an independent factor in determining students' reading comprehension performance. Finds that academic vocabulary in comprehension questions significantly decreased question-answering performance. Computes simple, multiple, and semipartial correlations between vocabulary…
Descriptors: Academic Discourse, Correlation, Intermediate Grades, Reading Comprehension
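The Cunningham and Moore abstract mentions simple, multiple, and semipartial correlations. As a rough illustration of those three quantities (the variables and data below are hypothetical, not the authors' analysis):

```python
# Hedged sketch: simple, multiple, and semipartial correlations on made-up
# variables (question vocabulary load, passage readability, answering score).
# Not the authors' data or analysis.
import numpy as np

rng = np.random.default_rng(2)
n = 200
vocab = rng.normal(size=n)                           # vocabulary load of each question
readability = 0.4 * vocab + rng.normal(size=n)       # a correlated covariate
score = -0.5 * vocab - 0.3 * readability + rng.normal(size=n)  # answering performance

def r(a, b):
    """Pearson correlation of two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

r_yv, r_yr, r_vr = r(score, vocab), r(score, readability), r(vocab, readability)

simple = r_yv                                        # simple correlation
R2 = (r_yv**2 + r_yr**2 - 2 * r_yv * r_yr * r_vr) / (1 - r_vr**2)
multiple_R = np.sqrt(R2)                             # multiple correlation (two predictors)
# semipartial: vocabulary's unique contribution after removing readability from vocabulary
semipartial = (r_yv - r_yr * r_vr) / np.sqrt(1 - r_vr**2)

print(f"simple r = {simple:.3f}, multiple R = {multiple_R:.3f}, semipartial r = {semipartial:.3f}")
```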
Llaneras, Robert E.; And Others – Performance and Instruction, 1993
Presents TIFAID (Test Item Format Job Aid), a job aid for determining test-item format based on adequately constructed instructional objectives. The four sections of the job aid are described: (1) a task classification system; (2) task-related questions; (3) a flowchart; and (4) a tips and techniques guide. (Contains four references.) (LRW)
Descriptors: Classification, Educational Objectives, Evaluation Methods, Flow Charts
Peer reviewed
Noe, Francis P.; Snow, Rob – Journal of Environmental Education, 1990
Examines the responses of park visitors to the New Environmental Paradigm scale. Research methods and results, including reliabilities and factor analysis of the survey scales, are discussed. (CW)
Descriptors: Community Attitudes, Conservation (Environment), Environmental Education, Postsecondary Education
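For the kind of reliability estimate the Noe and Snow abstract refers to, a minimal sketch of Cronbach's alpha on simulated Likert-type responses (not the NEP survey data); an exploratory factor analysis of the same response matrix would typically accompany it:

```python
# Hedged sketch: Cronbach's alpha on simulated 1-5 Likert responses, the kind of
# internal-consistency reliability the abstract above refers to. Not the NEP data.
import numpy as np

rng = np.random.default_rng(3)
n_respondents, n_items = 300, 12
latent = rng.normal(size=(n_respondents, 1))         # a single underlying attitude
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(n_respondents, n_items))), 1, 5)

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```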
Peer reviewed
Armstrong, Ronald D.; And Others – Journal of Educational Statistics, 1994
A network-flow model is formulated for constructing parallel tests based on classical test theory while using test reliability as the criterion. Practitioners can specify a test-difficulty distribution for values of item difficulties as well as test-composition requirements. An empirical study illustrates the reliability of generated tests. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
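The Armstrong et al. abstract describes a network-flow model for assembling parallel forms. The toy sketch below assigns bank items to the slots of two forms by min-cost flow, using distance from a per-slot target difficulty as the cost; the item p-values, targets, and form length are invented, and the published model optimizes test reliability rather than this simple difficulty match.

```python
# Hedged sketch: a toy min-cost-flow assembly of two "parallel" forms from a
# small item bank, loosely in the spirit of a network-flow formulation. The item
# p-values, slot targets, and form length are invented; the published model
# optimizes test reliability rather than this simple difficulty match.
import networkx as nx

p_values = [0.35, 0.42, 0.48, 0.55, 0.61, 0.67, 0.72, 0.78, 0.40, 0.58, 0.65, 0.80]
targets = [0.40, 0.55, 0.65, 0.75]        # desired difficulty spread, same for each form
forms, length = 2, len(targets)

G = nx.DiGraph()
G.add_node("src", demand=-forms * length)
G.add_node("sink", demand=forms * length)
for i, p in enumerate(p_values):
    G.add_edge("src", ("item", i), capacity=1, weight=0)       # each item used at most once
    for f in range(forms):
        for s, t in enumerate(targets):
            # integer cost = scaled distance of the item from the slot's target difficulty
            G.add_edge(("item", i), ("slot", f, s), capacity=1, weight=round(100 * abs(p - t)))
for f in range(forms):
    for s in range(length):
        G.add_edge(("slot", f, s), "sink", capacity=1, weight=0)  # each slot takes one item

flow = nx.min_cost_flow(G)
for f in range(forms):
    chosen = [i for i in range(len(p_values))
              if any(flow[("item", i)][("slot", f, s)] for s in range(length))]
    print(f"form {f}: items {chosen} with p-values {[p_values[i] for i in chosen]}")
```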
Peer reviewed
Nandakumar, Ratna – Journal of Educational Measurement, 1993
The phenomenon of simultaneous differential item functioning (DIF) amplification and cancellation and the role of the SIBTEST approach in detecting DIF are investigated with a variety of simulated test data. The effectiveness of SIBTEST is supported, and the implications of DIF amplification and cancellation are discussed. (SLD)
Descriptors: Computer Simulation, Elementary Secondary Education, Equal Education, Equations (Mathematics)
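As a rough idea of the kind of statistic SIBTEST builds on, the sketch below computes an uncorrected, standardization-style index on simulated data: examinees are matched on their score over the non-studied items, and conditional mean differences on the studied item are averaged with focal-group weights. The actual SIBTEST beta-hat also applies a regression correction to the conditional means, omitted here.

```python
# Hedged sketch: an uncorrected, standardization-style DIF index in the spirit of
# SIBTEST's beta-hat, computed on simulated Rasch-type data. The real SIBTEST
# statistic applies a regression correction to the conditional means, omitted here.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, studied_shift=0.0, n_items=20):
    """Rasch-type 0/1 responses; the last item is the studied item."""
    theta = rng.normal(size=(n, 1))
    b = np.linspace(-1.5, 1.5, n_items)
    b[-1] += studied_shift                       # DIF injected on the studied item
    prob = 1 / (1 + np.exp(-(theta - b)))
    return (rng.random((n, n_items)) < prob).astype(int)

ref = simulate(2000)                             # reference group
foc = simulate(2000, studied_shift=0.5)          # focal group: studied item is harder

match_ref = ref[:, :-1].sum(axis=1)              # matching (valid) subtest scores
match_foc = foc[:, :-1].sum(axis=1)

beta, weight = 0.0, 0
for k in np.union1d(match_ref, match_foc):
    r_mask, f_mask = match_ref == k, match_foc == k
    if r_mask.any() and f_mask.any():
        w = f_mask.sum()                         # weight by focal-group count at score k
        beta += w * (ref[r_mask, -1].mean() - foc[f_mask, -1].mean())
        weight += w
print(f"uncorrected beta-hat ~= {beta / weight:.3f}")
```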
Peer reviewed
Stocking, Martha L.; Swanson, Len – Applied Psychological Measurement, 1993
A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Expert Systems
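The Stocking and Swanson abstract refers to a weighted deviations model for constrained adaptive item selection. The sketch below is a much-simplified greedy version of that idea: each candidate item is scored by a weighted sum of projected constraint violations minus its Fisher information, and the lowest-scoring item is selected. Pool, constraints, and weights are invented.

```python
# Hedged sketch: greedy item selection that trades Fisher information against
# weighted deviations from content constraints, in the spirit of a weighted
# deviations approach. The pool, constraints, and weights are invented, and the
# published method's projection of future deviations is simplified here.
import math

# hypothetical pool: (item id, discrimination a, difficulty b, content area)
pool = [(1, 1.2, -0.5, "algebra"), (2, 0.8, 0.0, "geometry"),
        (3, 1.5, 0.3, "algebra"), (4, 1.0, 0.8, "geometry"),
        (5, 1.1, -0.2, "number"), (6, 0.9, 0.5, "number")]

# constraint: (content area, lower bound, upper bound, weight)
constraints = [("algebra", 1, 2, 5.0), ("geometry", 1, 2, 5.0), ("number", 1, 2, 5.0)]
test_length = 4

def info(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a * a * p * (1 - p)

def weighted_deviation(counts, remaining):
    """Weighted shortfall/excess relative to each constraint's bounds, assuming
    (optimistically) that the remaining slots could still make up shortfalls."""
    total = 0.0
    for area, lo, hi, w in constraints:
        c = counts.get(area, 0)
        total += w * (max(0, lo - c - remaining) + max(0, c - hi))
    return total

theta, selected, counts = 0.0, [], {}
available = list(pool)
for step in range(test_length):
    remaining = test_length - step - 1
    def score(item):
        iid, a, b, area = item
        trial = dict(counts); trial[area] = trial.get(area, 0) + 1
        return weighted_deviation(trial, remaining) - info(a, b, theta)
    best = min(available, key=score)
    available.remove(best)
    selected.append(best[0])
    counts[best[3]] = counts.get(best[3], 0) + 1
print("selected items:", selected, "content counts:", counts)
```

In a real CAT the ability estimate would be updated after each administered item; theta is held fixed at 0 here only to keep the sketch short.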
Peer reviewed
Cohen, Allan S.; And Others – Applied Psychological Measurement, 1993
Three measures of differential item functioning for the dichotomous response model are extended to include Samejima's graded response model. Two are based on area differences between item true score functions, and one is a chi-square statistic for comparing differences in item parameters. (SLD)
Descriptors: Chi Square, Comparative Analysis, Identification, Item Bias
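The area-based measures mentioned in the Cohen et al. abstract compare item true score (expected score) functions estimated separately in two groups. A minimal numerical version for one graded response model item, with invented parameter values (the published measures also include a signed area and weighting by the focal ability distribution):

```python
# Hedged sketch: unsigned area between two groups' expected-score functions for
# one graded response model (GRM) item, approximated on a theta grid. Parameter
# values are invented for illustration.
import numpy as np

def grm_expected_score(theta, a, b):
    """Expected item score under Samejima's GRM.
    b: ordered boundary locations; categories run 0..len(b).
    Uses E[X] = sum over k of P(X >= k), where P(X >= k) is a 2PL boundary curve."""
    theta = np.asarray(theta)[:, None]
    p_star = 1 / (1 + np.exp(-a * (theta - np.asarray(b)[None, :])))
    return p_star.sum(axis=1)

theta = np.linspace(-4, 4, 401)
ref = grm_expected_score(theta, a=1.3, b=[-1.0, 0.0, 1.0])   # reference-group estimates
foc = grm_expected_score(theta, a=1.3, b=[-0.6, 0.4, 1.4])   # focal-group estimates

dtheta = theta[1] - theta[0]
unsigned_area = np.sum(np.abs(ref - foc)) * dtheta           # simple Riemann approximation
print(f"unsigned area between true score functions ~= {unsigned_area:.3f}")
```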
Peer reviewed
Camilli, Gregory; And Others – Applied Psychological Measurement, 1993
Three potential causes of scale shrinkage (measurement error, restriction of range, and multidimensionality) in item response theory vertical equating are discussed, and a more comprehensive model-based approach to establishing vertical scales is described. Test data from the National Assessment of Educational Progress are used to illustrate the…
Descriptors: Equated Scores, Error of Measurement, Item Response Theory, Maximum Likelihood Statistics
Peer reviewed
Taylor, Carol; Kirsch, Irwin; Jamieson, Joan; Eignor, Daniel – Language Learning, 1999
Administered a questionnaire focusing on examinees' computer familiarity to 90,000 Test of English as a Foreign Language (TOEFL) test takers. A group of 1,200 low-computer-familiarity and high-computer-familiarity examinees worked through a computer tutorial and a set of TOEFL test items. Concludes that no evidence exists of an adverse relationship between…
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Literacy, Familiarity
Peer reviewed
Odafe, Victor U. – Mathematics Teacher, 1998
Describes how a researcher used student cooperative-learning teams to contribute test items. Discusses the questions generated by students and concludes that teachers have the flexibility to encourage students to generate and solve their own problems. Also concludes that students have the opportunity to be creative and formulate and pose questions…
Descriptors: Cooperative Learning, Group Activities, Mathematics Instruction, Mathematics Tests
Peer reviewed
Freedle, Roy; Kostin, Irene – Intelligence, 1997
Factors that might influence differential item functioning values associated with 217 Scholastic Assessment Test and 234 Graduate Record Examination analogy items were studied with black examinees and white examinees. For 11 test forms, the median numbers of examinees were 6,265 blacks and 37,735 whites. The significant ethnic comparisons are…
Descriptors: Blacks, College Entrance Examinations, Cultural Differences, Ethnicity
Peer reviewed
Qualls, Audrey L. – Educational Measurement: Issues and Practice, 2001
Studied students' erasure behavior on a standardized (low-stakes) test. Results for 5,641 students in grades 4, 8, and 11 suggest that students do change their responses on an average of 6 to 7% of the tested items and that overall achievement appears unrelated to a student's tendency to erase and change a response. Discusses conditions under…
Descriptors: Cheating, Elementary School Students, Elementary Secondary Education, Responses
Clements, Andrea D.; Rothenberg, Lori – Research in the Schools, 1996
Undergraduate psychology examinations from 48 schools were analyzed to determine the proportion of items at each level of Bloom's Taxonomy, item format, and test length. Analyses indicated significant relationships between item complexity and test length even when taking format into account. Use of higher-level items may be related to shorter tests,…
Descriptors: Classification, Difficulty Level, Educational Objectives, Higher Education
Peer reviewed
Embretson, Susan E. – Journal of Educational Measurement, 1996
Comparison of the correlates of two spatial ability tests that used the same item type but different test design principles (cognitive design versus psychometric design) indicated differences in the factorial complexity of the two tests. For the sample of 209 undergraduates, the impact of verbal abilities was substantially reduced by applying the…
Descriptors: Cognitive Processes, Correlation, Factor Structure, Higher Education