Showing 6,676 to 6,690 of 16,779 results
Plumer, Gilbert E. – 1999
The nontechnical ability to identify or match argumentative structure is considered by many to be an important reasoning skill. Instruments that have questions designed to measure this skill include major standardized tests for graduate school admission, for example, the Law School Admission Test (LSAT), the Graduate Record Examination (GRE), and…
Descriptors: College Entrance Examinations, Persuasive Discourse, Test Construction, Test Items
Matthews-Lopez, Joy L.; Hombo, Catherine M. – 2001
The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…
Descriptors: Estimation (Mathematics), Monte Carlo Methods, Statistical Distributions, Test Construction
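The entry above describes a simulate-and-recover design: responses are generated from known item parameters and MCMC estimation is used to see how well those generating values are recovered. A minimal sketch of that idea, for a single Rasch-model item with known abilities and a simple random-walk Metropolis sampler, is given below; the true difficulty, prior, and sampler settings are illustrative assumptions, not the conditions of the study.

```python
# Minimal simulate-and-recover sketch (illustrative assumptions throughout):
# simulate Rasch responses from a known item difficulty, then recover it
# with a random-walk Metropolis sampler.
import numpy as np

rng = np.random.default_rng(42)
true_b = 0.5                      # generating difficulty for one item
theta = rng.normal(0, 1, 500)     # known examinee abilities

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

responses = rng.binomial(1, p_correct(theta, true_b))

def log_posterior(b):
    p = p_correct(theta, b)
    log_lik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    log_prior = -0.5 * b ** 2     # standard normal prior on difficulty
    return log_lik + log_prior

# Random-walk Metropolis over the difficulty parameter.
b_current, samples = 0.0, []
for _ in range(5000):
    b_proposal = b_current + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_posterior(b_proposal) - log_posterior(b_current):
        b_current = b_proposal
    samples.append(b_current)

print("posterior mean difficulty:", np.mean(samples[1000:]))  # should be near 0.5
```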
Rhodes, Lynn K., Ed. – 1993
This handbook contains instruments to gather literacy assessment data. The instruments are either discussed in "Windows into Literacy: Assessing Learners, K-8," or they are related to other instruments discussed in that book. The instruments may be photocopied by teachers for use in assessment or revised to answer teachers' questions about their…
Descriptors: Elementary Education, Literacy, Measures (Individuals), Student Evaluation
Kifer, Edward – 2001
This discussion of the challenges associated with contemporary assessments opens with the description of an assessment grid. The grid, which contains 11 dimensions on which assessments may vary, and examples of its use form the first third of the book. The National Assessment of Educational Progress, two international studies, and the Kentucky…
Descriptors: Educational Assessment, Elementary Secondary Education, Student Evaluation, Test Construction
Rudner, Lawrence M. – 2000
This digest introduces ways of responding to the call for criterion-referenced information using Bayes' Theorem, a method that was coupled with criterion-referenced testing in the early 1970s (see R. Hambleton and M. Novick, 1973). To illustrate Bayes' Theorem, an example is given in which the goal is to classify an examinee as being a master or…
Descriptors: Adaptive Testing, Bayesian Statistics, Criterion Referenced Tests, Test Construction
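The digest summarized above couples Bayes' Theorem with criterion-referenced testing to classify an examinee as a master or nonmaster from item responses. A minimal sketch of that kind of classification follows; the prior, the assumed correct-response probabilities for masters and nonmasters, and the response pattern are illustrative assumptions, not values from the digest.

```python
# Bayesian mastery classification sketch (illustrative parameter values).
def posterior_master(responses, p_correct_master=0.8, p_correct_nonmaster=0.4,
                     prior_master=0.5):
    """Return P(master | responses) for dichotomous (1/0) item responses."""
    like_master = 1.0
    like_nonmaster = 1.0
    for r in responses:
        like_master *= p_correct_master if r == 1 else (1 - p_correct_master)
        like_nonmaster *= p_correct_nonmaster if r == 1 else (1 - p_correct_nonmaster)
    numerator = like_master * prior_master
    denominator = numerator + like_nonmaster * (1 - prior_master)
    return numerator / denominator

# Example: 8 of 10 items correct strongly favors the "master" classification.
print(posterior_master([1, 1, 1, 0, 1, 1, 1, 0, 1, 1]))
```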
Moskal, Barbara M. – 2003
This Digest draws from the current literature and the author's experience to identify suggestions for developing performance assessments and their accompanying scoring rubrics. This Digest, part 1, addresses writing goals and objectives and developing performance assessments. Before a performance assessment or scoring rubric is written or…
Descriptors: Measurement Techniques, Performance Based Assessment, Scoring Rubrics, Test Construction
Fisher, William P.; Suttkus, Ramona; DiCarlo, Richard – 2000
This paper shows that a substantial degree of invariance can be attained with an examination not explicitly designed to do so, provides an example of how invariance can be demonstrated through plots, and dispels misconceptions concerning the rigidity of the definition of invariance. Responses of 177 examinees to 94 items of a final examination…
Descriptors: Higher Education, Medical Education, Medical Students, Scaling
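The paper summarized above demonstrates invariance through plots. One common way to do this is to calibrate item difficulties separately in two subsamples of examinees and plot the two sets of estimates against each other; points near the identity line indicate invariant calibrations. The sketch below illustrates that style of plot with simulated data and a crude logit-of-p-value difficulty estimate; it is not the authors' analysis.

```python
# Invariance cross-plot sketch (simulated data, illustrative assumptions).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_items, n_persons = 30, 400
true_b = rng.normal(0, 1, n_items)
theta = rng.normal(0, 1, n_persons)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_b[None, :])))
responses = rng.binomial(1, prob)

def crude_difficulty(resp):
    """Logit of item p-values as a rough difficulty estimate."""
    p = resp.mean(axis=0).clip(0.01, 0.99)
    return -np.log(p / (1 - p))

half1, half2 = responses[: n_persons // 2], responses[n_persons // 2:]
b1, b2 = crude_difficulty(half1), crude_difficulty(half2)

plt.scatter(b1, b2)
plt.plot([-3, 3], [-3, 3])        # identity line: perfect invariance
plt.xlabel("difficulty, subsample 1")
plt.ylabel("difficulty, subsample 2")
plt.show()
```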
Leung, Chi K.; Chang, Hua H.; Hau, Kit T. – 1999
An a-stratified design (H. Chang and Z. Ying, 1997) is a new concept proposed to address the issues of item security and pool utilization in testing. It has been demonstrated to be effective in lowering the test overlap rate and improving use of the entire pool when content constraints are not a primary concern. However, it cannot really solve the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
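The a-stratified design referenced above sorts the item pool by discrimination (a), administers items from the low-a strata early in the test, and within a stratum selects the item whose difficulty (b) is closest to the current ability estimate. A minimal sketch of that selection rule follows; the pool size, parameter ranges, and number of strata are illustrative assumptions, and content constraints are ignored.

```python
# A-stratified item selection sketch (illustrative pool and settings).
import numpy as np

rng = np.random.default_rng(0)
pool = [{"a": a, "b": b, "used": False}
        for a, b in zip(rng.uniform(0.5, 2.0, 60), rng.normal(0, 1, 60))]

def stratify(pool, n_strata=3):
    """Sort items by discrimination and split into equal-sized strata."""
    ordered = sorted(pool, key=lambda item: item["a"])
    size = len(ordered) // n_strata
    return [ordered[i * size:(i + 1) * size] for i in range(n_strata)]

def select_item(stratum, theta):
    """Within a stratum, pick the unused item whose difficulty is closest to theta."""
    available = [item for item in stratum if not item["used"]]
    best = min(available, key=lambda item: abs(item["b"] - theta))
    best["used"] = True
    return best

strata = stratify(pool)
theta_hat = 0.0
# Administer low-a items first, saving high-a items for later in the test.
for stage, stratum in enumerate(strata):
    item = select_item(stratum, theta_hat)
    print(f"stage {stage}: a={item['a']:.2f}, b={item['b']:.2f}")
```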
Mislevy, Robert J.; Steinberg, Linda S.; Almond, Russell G. – 1999
Tasks are the most visible element in an educational assessment. Their purpose, however, is to provide evidence about targets of inference that cannot be directly seen at all: what examinees know and can do, more broadly conceived than can be observed in the context of any particular set of tasks. This paper concerns issues in an assessment design…
Descriptors: Educational Assessment, Evaluation Methods, Higher Education, Models
Thomas, Susan J. – 1999
Creating a survey that asks the right questions at a level appropriate for the intended audience is a difficult task. This guide is designed to support educators who want to be confident that the data they gather will be useful. The guide is organized according to the developmental steps in creating a survey. Individual chapters correspond to the…
Descriptors: Data Collection, Planning, Questionnaires, Research Design
Luecht, Richard M. – 2000
Computerized testing has created new challenges for the production and administration of test forms. This paper describes a multi-stage, testlet-based framework for test design, assembly, and administration called computer-adaptive sequential testing (CAST). CAST is a structured testing approach that is amenable to both adaptive and mastery…
Descriptors: Adaptive Testing, Computer Assisted Testing, Mastery Tests, Test Construction
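CAST, as summarized above, is a multi-stage, testlet-based framework: examinees take a routing module and are then directed to easier or harder second-stage modules based on their provisional performance. The sketch below shows that routing idea in miniature; the module contents and cut score are illustrative assumptions, not Luecht's specification.

```python
# Multi-stage routing sketch (illustrative modules and cut score).
modules = {
    "routing": ["R1", "R2", "R3", "R4", "R5"],
    "stage2_easy": ["E1", "E2", "E3", "E4", "E5"],
    "stage2_hard": ["H1", "H2", "H3", "H4", "H5"],
}

def route(routing_score, cut=3):
    """Choose the second-stage module from the routing-testlet number-correct score."""
    return "stage2_hard" if routing_score >= cut else "stage2_easy"

# Example: 4 of 5 correct on the routing testlet routes to the harder module.
second_module = route(routing_score=4)
print(modules[second_module])
```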
Peer reviewed
Dudycha, Arthur L.; Carpenter, James B. – Journal of Applied Psychology, 1973
In this study, three structural characteristics--stem format, inclusive versus specific distracters, and stem orientation--were selected for experimental manipulation, while the number of alternatives, the number of correct answers, and the order of items were experimentally controlled. (Author)
Descriptors: Discriminant Analysis, Item Analysis, Multiple Choice Tests, Test Construction
Peer reviewed
Kruger, Irwin – Journal of Educational and Psychological Measurement, 1974
Descriptors: Computer Programs, Item Banks, Multiple Choice Tests, Test Construction
Peer reviewed
Smith, A. G. – Australian Science Teachers Journal, 1972
Presents the theoretical advantages of banks of test items from which tests with pre-determined characteristics can be constructed, with particular emphasis on the possibility of providing comparable achievement data concerning students from different schools without forcing all to take exactly the same test. Reviews some related literature. (AL)
Descriptors: Achievement Tests, Evaluation, Secondary School Science, Test Construction
Peer reviewed
Layton, Frances – Alberta Journal of Educational Research, 1973
The purpose of this study was to test a short version of the Stanford-Binet, Form L-M, using a group covering a wide range of ages and ability levels, in an attempt to reduce the time required to administer some of the S-B tests without sacrificing the reported accuracy. (Author/CB)
Descriptors: Intelligence Tests, Scoring Formulas, Tables (Data), Test Construction