Showing 7,996 to 8,010 of 9,530 results
Kingsbury, G. Gage; And Others – Technological Horizons in Education, 1988
Explores what some deem the best way to objectively determine what a student knows. Adaptive testing has been around since the early 1900s, but only with the advent of computers has it been effectively applied to day-to-day educational management. Cites a pilot study in Portland, Oregon, public schools. (MVL)
Descriptors: Administration, Computer Uses in Education, Diagnostic Teaching, Individual Needs
Peer reviewed
Willis, John A. – Educational Measurement: Issues and Practice, 1990
The Learning Outcome Testing Program of the West Virginia Department of Education is designed to provide public school teachers/administrators with test questions matching learning outcomes. The approach, software selection, results of pilot tests with teachers in 13 sites, and development of test items for item banks are described. (SLD)
Descriptors: Classroom Techniques, Computer Assisted Testing, Computer Managed Instruction, Elementary Secondary Education
Peer reviewed
Dorton, Ian – Economics, 1989
Examines the organization of the extended project that is part of the General Certificate of Secondary Education (GCSE) A Level Business Studies examination. Provides a timetable for implementing the project. Includes student evaluations of the project. (LS)
Descriptors: Achievement Tests, Business Education, Economics, Economics Education
Peer reviewed
Bresnock, Anne E.; And Others – Journal of Economic Education, 1989
Investigates the effects on multiple-choice test performance of altering the order and placement of questions and responses. Shows that changing the response pattern appears to significantly alter the apparent degree of difficulty. Response patterns become more dissimilar under certain types of response alterations. (LS)
Descriptors: Cheating, Economics Education, Educational Research, Grading
Peer reviewed
Lukhele, Robert; And Others – Journal of Educational Measurement, 1994
Fitting item response models to data from 2 Advanced Placement exams (18,462 and 82,842 students) demonstrates that constructed response items add little to information provided by multiple choice and that scoring on the basis of student item selection gives almost as much information as scoring on the basis of answers. (SLD)
Descriptors: Achievement Tests, Advanced Placement, Chemistry, Constructed Response
Peer reviewed
Carpenter, Patricia A.; And Others – Psychological Review, 1990
Cognitive processes in the Raven Progressive Matrices Test, a nonverbal test of analytic intelligence, are analyzed in terms of processes distinguishing between high- and low-scoring students and processes common to all subjects and test items. Two experiments with 89 college students identify the abilities distinguishing among individuals. (SLD)
Descriptors: Ability, Cognitive Processes, College Students, Computer Simulation
Peer reviewed
Mislevy, Robert J.; And Others – Journal of Educational Measurement, 1992
Concepts behind plausible values in estimating population characteristics from sparse matrix samples of item responses are discussed. The use of marginal analyses is described in the context of the National Assessment of Educational Progress, and the approach is illustrated with Scholastic Aptitude Test data for 9,075 high school seniors. (SLD)
Descriptors: College Entrance Examinations, Educational Assessment, Equations (Mathematics), Estimation (Mathematics)
Peer reviewed
Martinez, Michael E.; Bennett, Randy Elliot – Applied Measurement in Education, 1992
New developments in the use of automatically scorable constructed response item types for large-scale assessment are reviewed for five domains: (1) mathematical reasoning; (2) algebra problem solving; (3) computer science; (4) architecture; and (5) natural language. Ways in which these technologies are likely to shape testing are considered. (SLD)
Descriptors: Algebra, Architecture, Automation, Computer Science
Peer reviewed
Fowler, Robert L.; Clingman, Joy M. – Educational and Psychological Measurement, 1992
Monte Carlo techniques are used to examine the power of the "B" statistic of R. L. Brennan (1972) to detect negatively discriminating items drawn from a variety of nonnormal population distributions. A simplified procedure is offered for conducting an item-discrimination analysis on typical classroom objective tests. (SLD)
Descriptors: Classroom Techniques, Elementary Secondary Education, Equations (Mathematics), Item Analysis
Peer reviewed
Braxton, John M. – Journal of Higher Education, 1993
A study investigated the relationship between undergraduate admissions selectivity at 40 research universities and academic rigor of course examination questions, as determined by the level of understanding required. Results suggest that more selective institutions do not provide more academically rigorous instruction than less selective ones.…
Descriptors: Academic Standards, Admission Criteria, College Admission, Comparative Analysis
Peer reviewed
Kerkman, Dennis D.; And Others – Teaching of Psychology, 1994
Reports on a study of 96 undergraduate developmental psychology students and their performance on student-developed "pop quizzes." Students who participated in writing test items had significantly higher scores than students who did not. Calls for more research into the effectiveness of other student-developed evaluation methods. (CFR)
Descriptors: Academic Achievement, Course Content, Educational Strategies, Higher Education
Peer reviewed
Beller, Michael – Applied Psychological Measurement, 1990
Geometric approaches to representing interrelations among tests and items are compared with an additive tree model (ATM), using 2,644 examinees and 2 other data sets. The ATM's close fit to the data and its coherence of presentation indicate that it is the best means of representing tests and items. (TJH)
Descriptors: College Students, Comparative Analysis, Factor Analysis, Foreign Countries
Peer reviewed
Li, Yuan H.; Lissitz, Robert W. – Journal of Educational Measurement, 2004
The analytically derived asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be approximated by a mathematical function without examinees' responses to test items, and the empirically determined SEs of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Test Items, Computation, Item Response Theory, Error of Measurement
Peer reviewed
Cheng, Shein-Yung; Lin, Chia-Sheng; Chen, Hsien-Hsun; Heh, Jia-Sheng – Computers and Education, 2005
In a classroom, a teacher attempts to convey his or her knowledge to the students, and thus it is important for the teacher to obtain formative feedback about how well students are understanding the new material. By gaining insight into the students' understanding and possible misconceptions, the teacher will be able to adjust the teaching and to…
Descriptors: Misconceptions, Teaching Methods, Formative Evaluation, Feedback
Peer reviewed
Harlow, Ann; Jones, Alister – Research in Science Education, 2004
The purpose of this study was to explore how Year 8 students answered Third International Mathematics and Science Study (TIMSS) questions and whether the test questions represented the scientific understanding of these students. One hundred and seventy-seven students were tested using written test questions taken from the science test used in the…
Descriptors: Science Tests, Test Items, Grade 8, Scientific Concepts