Showing 6,241 to 6,255 of 9,552 results
Peer reviewed
Ban, Jae-Chun; Hanson, Bradley A.; Yi, Qing; Harris, Deborah J. – Journal of Educational Measurement, 2002
Compared three online pretest calibration scaling methods through simulation: (1) marginal maximum likelihood with one expectation maximization (EM) cycle (OEM); (2) marginal maximum likelihood with multiple EM cycles (MEM); and (3) M. Stocking's method B. MEM produced the smallest average total error in parameter estimation; OEM yielded…
Descriptors: Computer Assisted Testing, Error of Measurement, Maximum Likelihood Statistics, Online Systems
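The shared core of these calibration methods, fitting pretest item parameters while operational ability estimates are held fixed, is compact enough to sketch. A minimal illustration assuming a Rasch model, with invented data; it shows the item-side maximum likelihood update that OEM/MEM-style cycles build on, not the specific methods the study compares.

```python
import numpy as np

def calibrate_rasch_item(theta, responses, n_iter=25):
    """Maximum likelihood estimate of one pretest item's difficulty b,
    treating examinee abilities theta as known, via Newton-Raphson.
    This fixed-ability step is what an OEM/MEM-style cycle builds on."""
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))   # P(correct | theta, b)
        grad = np.sum(responses - p)             # d(-loglik)/db
        hess = np.sum(p * (1.0 - p))             # d2(-loglik)/db2
        b -= grad / hess
    return b

rng = np.random.default_rng(0)
theta = rng.normal(size=2000)                    # "known" operational abilities
true_b = 0.7
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-(theta - true_b)))).astype(float)
print(calibrate_rasch_item(theta, y))            # recovers a value near 0.7
```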
Peer reviewed
Bline, Dennis; Lowe, Dana R.; Meixner, Wilda F.; Nouri, Hossein – Journal of Business Communication, 2003
Presents the results of an investigation about the effect of question order randomization on the psychometric properties of two frequently used oral and written apprehension instruments: McCroskey's oral communication apprehension scale and Daly and Miller's writing apprehension scale. Shows that the measurement properties of these instruments…
Descriptors: Communication Apprehension, Communication Research, Higher Education, Questionnaires
Peer reviewed
Hynan, Linda S.; Foster, Barbara M. – Teaching of Psychology, 1997
Describes a project used in a sophomore-level psychological testing and measurement course. Students worked through the different phases of developing a test focused on item writing, reliability, and validity. Responses from both students and instructors have been consistently positive. (MJP)
Descriptors: Higher Education, Item Analysis, Item Response Theory, Psychological Testing
Peer reviewed
Berger, Martijn P. F.; Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 1997
Some alternative criteria for item selection in adaptive testing are proposed that take into account uncertainty in the ability estimates. A simulation study shows that the likelihood weighted information criterion is a good alternative to the maximum information criterion. Another good alternative uses a Bayesian expected a posteriori estimator.…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
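The likelihood-weighted criterion can be illustrated with a grid approximation. A hedged sketch assuming a 2PL item pool and a discrete theta grid; the function names and example parameters are invented for illustration, not taken from the paper.

```python
import numpy as np

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def select_item(grid, administered, responses, pool_a, pool_b):
    """Rank each candidate item by sum_k L(theta_k | data) * I_i(theta_k)
    on a theta grid (likelihood-weighted information), instead of Fisher
    information at the point ability estimate alone."""
    L = np.ones_like(grid)                       # likelihood of responses so far
    for j, u in zip(administered, responses):
        p = p2pl(grid, pool_a[j], pool_b[j])
        L *= p**u * (1.0 - p)**(1 - u)
    best, best_val = None, -np.inf
    for i in range(len(pool_a)):
        if i in administered:
            continue
        p = p2pl(grid, pool_a[i], pool_b[i])
        info = pool_a[i]**2 * p * (1.0 - p)      # 2PL Fisher information
        val = np.sum(L * info)
        if val > best_val:
            best, best_val = i, val
    return best

grid = np.linspace(-4.0, 4.0, 81)
a = np.array([1.0, 1.4, 0.8, 1.2])
b = np.array([-0.5, 0.3, 1.0, 0.0])
print(select_item(grid, [0, 1], [1, 0], a, b))   # index of the next item
```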
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1996
An item selection procedure for computerized adaptive testing based on average global information is proposed. Results from simulation studies comparing the approach with the usual maximum item information item selection indicate that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
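The average global information idea can be sketched as Kullback-Leibler divergence integrated over an interval around the current ability estimate. A minimal illustration under a 2PL assumption; the half-width delta and the crude numerical integration are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def kl_item(theta_hat, theta, a, b):
    """KL divergence between the item response distributions at
    theta_hat and theta for a dichotomous 2PL item."""
    p0 = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    p1 = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return p0 * np.log(p0 / p1) + (1 - p0) * np.log((1 - p0) / (1 - p1))

def global_info(theta_hat, a, b, delta=1.0, n=101):
    """Integrated KL information over [theta_hat - delta, theta_hat + delta];
    candidate items are ranked by this interval-based index rather than
    by Fisher information at theta_hat alone."""
    grid = np.linspace(theta_hat - delta, theta_hat + delta, n)
    return kl_item(theta_hat, grid, a, b).mean() * (2 * delta)

print(global_info(0.0, a=1.2, b=0.4))            # larger is more informative
```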
Peer reviewed
Jodoin, Michael G. – Journal of Educational Measurement, 2003
Analyzed examinee responses to conventional (multiple-choice) and innovative item formats in a computer-based testing program for item response theory (IRT) information with the three-parameter and graded response models. Results for more than 3,000 adult examinees on two tests show that the innovative item types in this study provided more…
Descriptors: Ability, Adults, Computer Assisted Testing, Item Response Theory
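For the multiple-choice items, the IRT information being compared comes from the standard three-parameter logistic information function. A sketch with made-up parameters; the graded response model's information function (used for the innovative formats) is omitted here.

```python
import numpy as np

def info_3pl(theta, a, b, c):
    """3PL item information, I(theta) = a^2 * (Q/P) * ((P - c)/(1 - c))^2,
    where P is the 3PL response probability and Q = 1 - P."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))
    q = 1.0 - p
    return a**2 * (q / p) * ((p - c) / (1.0 - c))**2

theta = np.linspace(-3, 3, 7)
print(info_3pl(theta, a=1.2, b=0.0, c=0.2))      # peaks slightly above b when c > 0
```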
Barnette, J. Jackson – Research in the Schools, 2001
Studied the primacy effect (tendency to select items closer to the left side of the response scale) in Likert scales worded from "Strongly Disagree" to "Strongly Agree" and in the opposite direction. Findings for 386 high school and college students show no primacy effect, although negatively worded stems had an effect on Cronbach's alpha. (SLD)
Descriptors: College Students, High School Students, High Schools, Higher Education
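The alpha computation behind that finding is short. A sketch of Cronbach's alpha from an item-score matrix, with invented Likert data; the comment on reverse-scoring reflects the standard practice that the study's wording manipulation bears on.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# On a 1-5 Likert scale, negatively worded stems are reverse-scored
# (x -> 6 - x) before computing alpha; leaving them unreversed deflates it.
likert = np.array([[4, 5, 4, 2], [3, 4, 3, 3], [5, 5, 4, 1], [2, 3, 2, 4]])
likert[:, 3] = 6 - likert[:, 3]                  # column 3 is negatively worded
print(cronbach_alpha(likert))
```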
Peer reviewed
Carter, Ronald; Long, Michael N. – ELT Journal, 1990
Explores the nature of examination questions in literature in teaching English-as-a-Foreign-Language (EFL). Three examples of questioning that are said to be more language based and that are suggested as supplements to conventional tests are discussed. These include general comprehension, textual focus, and personal response. (GLR)
Descriptors: English (Second Language), Literature Appreciation, Questioning Techniques, Second Language Instruction
Peer reviewed
Adema, Jos J. – Journal of Educational Measurement, 1990
Mixed integer linear programming models for customizing two-stage tests are presented. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. The models can be modified for use in the construction of multistage tests. (Author/TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Linear Programing
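A toy version of such an assembly model is easy to state as a mixed integer program. A hedged sketch using the PuLP library (my choice, not the paper's); the objective and constraints stand in for the constraint types listed above (test length, content coverage, an inter-item dependency) and are not Adema's actual model.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

# Illustrative pool: information at the target ability, plus a content area.
info    = [0.42, 0.35, 0.51, 0.28, 0.47, 0.33, 0.40, 0.30]
content = ["alg", "alg", "geo", "geo", "alg", "geo", "alg", "geo"]
n = len(info)

prob = LpProblem("first_stage_test", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(n)]   # 1 = item selected

prob += lpSum(info[i] * x[i] for i in range(n))             # maximize information
prob += lpSum(x) == 4                                       # test length
prob += lpSum(x[i] for i in range(n) if content[i] == "alg") == 2  # content mix
prob += x[0] + x[4] <= 1                                    # enemy items: at most one

prob.solve()
print([i for i in range(n) if x[i].value() == 1])           # selected item indices
```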
Peer reviewed
Long, Vena M.; And Others – Mathematics Teacher, 1989
Discussed are experiences in using the calculator to assess mathematical understanding on the Missouri Mastery and Achievement Tests (MMAT). Data from a calculator group and a no-calculator group at the eighth- and tenth-grade levels are reported. Several items showed differences between groups. (YP)
Descriptors: Achievement Tests, Calculators, Mathematics, Mathematics Achievement
Peer reviewed
Willson, Victor L. – Journal of Educational Measurement, 1989
Performance on items in intelligence and achievement tests can be represented in terms of child development and information processes. Research is reviewed on item performance that supports developmental and information processing effects, particularly in children. Some suggestions regarding item development are made. (Author/TJH)
Descriptors: Achievement Tests, Child Development, Cognitive Processes, Early Childhood Education
Peer reviewed
Chalifour, Clark L.; Powers, Donald E. – Journal of Educational Measurement, 1989
Content characteristics of 1,400 Graduate Record Examination (GRE) analytical reasoning items were coded and related to item difficulty and discrimination. The results provide content characteristics for consideration in extending specifications for analytical reasoning items and a better understanding of the construct validity of these items. (TJH)
Descriptors: College Entrance Examinations, Construct Validity, Content Analysis, Difficulty Level
Peer reviewed
Waks, S.; Barak, M. – Research in Science and Technological Education, 1988
Defines the Cognitive Difficulty Level (CDL) as the number of schemes required for solution (NS) times the learner resources required (Problem Solving Taxonomy [PST] level). Describes the validation procedures of the CDL index in high-school level electronics. (Author/YP)
Descriptors: Cognitive Ability, Content Analysis, Difficulty Level, Electronics
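The index itself is simple arithmetic. A one-function sketch of the CDL definition above; the example values are invented.

```python
def cognitive_difficulty_level(n_schemes, pst_level):
    """CDL as defined above: number of schemes required for solution (NS)
    times the Problem Solving Taxonomy (PST) level of the problem."""
    return n_schemes * pst_level

print(cognitive_difficulty_level(n_schemes=3, pst_level=2))  # CDL = 6
```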
Peer reviewed
Boldt, Robert F. – Language Testing, 1989
Attempts to identify latent variables affecting the item responses of the diverse language groups taking the Test of English As a Foreign Language indicated that latent group effects were small. Results support equating with item response theory and suggest the use of a restrictive assumption of proportionality of item response curves. (Author/CB)
Descriptors: English (Second Language), Item Response Theory, Language Proficiency, Language Tests
Peer reviewed
Ilai, Doron; Willerman, Lee – Intelligence, 1989
Items showing sex differences on the revised Wechsler Adult Intelligence Scale (WAIS-R) were studied. In a sample of 206 young adults (110 males and 96 females), 15 items demonstrated significant sex differences, but there was no relationship of item-specific gender content to sex differences in item performance. (SLD)
Descriptors: Comparative Testing, Females, Intelligence Tests, Item Analysis