Showing 1 to 15 of 21 results
Peer reviewed
Süleyman Demir; Derya Çobanoglu Aktan; Nese Güler – International Journal of Assessment Tools in Education, 2023
This study has two main purposes: first, to compare different item selection methods and stopping rules used in Computerized Adaptive Testing (CAT) applications, using simulated data generated from the item parameters of the Vocational Maturity Scale; and second, to test the validity of CAT application scores. For the first purpose,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Vocational Maturity, Measures (Individuals)
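The kind of CAT procedure compared in studies like this one can be sketched as a loop of item selection, response, and ability re-estimation. A minimal sketch, assuming a 2PL item pool, maximum-information item selection, and a standard-error stopping rule; all parameter values and function names are illustrative, not taken from the study:

```python
import math
import random

def prob_2pl(theta, a, b):
    """P(correct) under the two-parameter logistic (2PL) IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses, params, steps=10):
    """Newton-Raphson ML ability estimate, clamped to [-4, 4]."""
    theta = 0.0
    for _ in range(steps):
        grad = sum(a * (u - prob_2pl(theta, a, b))
                   for u, (a, b) in zip(responses, params))
        info = sum(item_information(theta, a, b) for a, b in params)
        theta = max(-4.0, min(4.0, theta + grad / info))
    return theta

def run_cat(true_theta, pool, se_stop=0.4, max_items=30, seed=0):
    """Maximum-information item selection with an SE-based stopping rule."""
    rng = random.Random(seed)
    theta, used, responses = 0.0, [], []
    for _ in range(max_items):
        # Pick the unused item most informative at the current ability estimate.
        best = max((i for i in range(len(pool)) if i not in used),
                   key=lambda i: item_information(theta, *pool[i]))
        used.append(best)
        a, b = pool[best]
        responses.append(1 if rng.random() < prob_2pl(true_theta, a, b) else 0)
        theta = estimate_theta(responses, [pool[i] for i in used])
        se = 1.0 / math.sqrt(sum(item_information(theta, *pool[i]) for i in used))
        if se <= se_stop:  # stop once the ability estimate is precise enough
            break
    return theta, used
```

Varying the stopping rule (fixed length vs. target SE) and the selection rule is exactly the kind of comparison such simulation studies run.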
Peer reviewed
Glamocic, Džana Salibašic; Mešic, Vanes; Neumann, Knut; Sušac, Ana; Boone, William J.; Aviani, Ivica; Hasovic, Elvedin; Erceg, Nataša; Repnik, Robert; Grubelnik, Vladimir – Physical Review Physics Education Research, 2021
Item banks are generally considered the basis of a new generation of educational measurement. In combination with specialized software, they can facilitate the computerized assembly of multiple pre-equated test forms. However, for the advantages of item banks to be fully realized, it is important that the item banks store a relatively large…
Descriptors: Item Banks, Test Items, Item Response Theory, Item Sampling
Peer reviewed
Bashkov, Bozhidar M.; Clauser, Jerome C. – Practical Assessment, Research & Evaluation, 2019
Successful testing programs rely on high-quality test items to produce reliable scores and defensible exams. However, determining what statistical screening criteria are most appropriate to support these goals can be daunting. This study describes and demonstrates cost-benefit analysis as an empirical approach to determining appropriate screening…
Descriptors: Test Items, Test Reliability, Evaluation Criteria, Accuracy
Peer reviewed
Özyurt, Hacer; Özyurt, Özcan – Eurasian Journal of Educational Research, 2015
Problem Statement: Learning-teaching activities create the need to determine whether they achieve their goals. Thus, multiple-choice tests that present the same set of questions to all examinees are frequently used. However, this traditional assessment and evaluation form contrasts with modern education, where individual learning characteristics are…
Descriptors: Probability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Lorié, William A. – Online Submission, 2013
A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…
Descriptors: Numeracy, Mathematical Concepts, Mathematical Logic, Difficulty Level
Peer reviewed
Kim, Sooyeon; Livingston, Samuel A. – Journal of Educational Measurement, 2010
Score equating based on small samples of examinees is often inaccurate for the examinee populations. We conducted a series of resampling studies to investigate the accuracy of five methods of equating in a common-item design. The methods were chained equipercentile equating of smoothed distributions, chained linear equating, chained mean equating,…
Descriptors: Equated Scores, Test Items, Item Sampling, Item Response Theory
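Of the five equating methods this study compares, chained mean equating is the simplest to illustrate: Form X is linked to the common-item anchor on one population by a mean shift, and the anchor is linked to Form Y on the other. A minimal sketch (the scores and the function name are illustrative, not from the study):

```python
from statistics import mean

def chained_mean_equating(x_scores, anchor_old, anchor_new, y_scores, x):
    """Chained mean equating in a common-item (NEAT) design.

    x_scores, anchor_old: old-form total and anchor scores (population 1).
    y_scores, anchor_new: new-form total and anchor scores (population 2).
    Returns the Form-Y equivalent of Form-X score x.
    """
    # Step 1: link X to the anchor scale on population 1 (mean shift).
    on_anchor = x - mean(x_scores) + mean(anchor_old)
    # Step 2: link the anchor scale to Y on population 2.
    return on_anchor - mean(anchor_new) + mean(y_scores)
```

Chained linear and equipercentile equating replace the mean shift with a linear transformation or a full percentile-rank mapping, which is why their small-sample behavior differs.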
Peer reviewed
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2006
Many academic tests (e.g. short-answer and multiple-choice) sample required knowledge with questions scoring 0 or 1 (dichotomous scoring). Few textbooks give useful guidance on the length of test needed to do this reliably. Posey's binomial error model of 1932 provides the best starting point, but allows neither for heterogeneity of question…
Descriptors: Item Sampling, Tests, Test Length, Test Reliability
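The binomial error model the abstract starts from treats an examinee's observed score on n dichotomous items as Binomial(n, p), which directly yields the standard error of the observed proportion-correct and hence a rough required test length. A minimal sketch, assuming the simple binomial model holds (it ignores the heterogeneity of question difficulty the abstract goes on to discuss):

```python
import math

def binomial_se(p, n):
    """SE of an examinee's observed proportion-correct on n dichotomous
    items, under the binomial error model (true proportion p)."""
    return math.sqrt(p * (1.0 - p) / n)

def items_needed(p, target_se):
    """Smallest test length whose binomial SE at true proportion p
    does not exceed target_se."""
    return math.ceil(p * (1.0 - p) / target_se ** 2)
```

For example, pinning down a borderline examinee (p near 0.5) to a standard error of 0.05 already requires on the order of a hundred items, which is the kind of test-length guidance the abstract says textbooks rarely give.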
Bedard, Roger – 1978
This paper briefly discusses the use of item sampling for student evaluation in a summative evaluation context. Its proposition is that item sampling is a means of achieving better and broader student evaluation because it leaves room for individualized evaluation. Item sampling can be implemented properly by associating it with the concept of item…
Descriptors: Achievement Tests, Educational Testing, Item Sampling, Learning Processes
Peer reviewed
Hoste, R. – British Journal of Educational Psychology, 1981
In this paper, a proposal is made by which a content validity coefficient can be calculated. An example of the use of the coefficient demonstrates that, in a CSE biology examination offering a choice of questions, different question combinations yielded different levels of content validity. (Author)
Descriptors: Achievement Tests, Biology, Content Analysis, Item Sampling
Peer reviewed
Passmore, David Lynn – Journal of Studies in Technical Careers, 1983
Vocational and technical education researchers need to be aware of the uses and limits of various statistical models. The author reviews the Rasch Model and applies it to results from a nutrition test given to student nurses. (Author)
Descriptors: Educational Research, Item Sampling, Nursing Education, Nutrition
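The Rasch model reviewed here gives the probability of a correct response as a logistic function of the difference between examinee ability and item difficulty. A minimal sketch of the probability function and a maximum-likelihood ability estimate, assuming item difficulties are already known (all values illustrative):

```python
import math

def rasch_prob(theta, b):
    """P(correct) under the Rasch (1PL) model: logistic in (theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def ml_ability(responses, difficulties, steps=20):
    """Newton-Raphson ML ability estimate given known item difficulties,
    clamped to [-4, 4] to handle all-correct/all-wrong response strings."""
    theta = 0.0
    for _ in range(steps):
        probs = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(u - p for u, p in zip(responses, probs))   # score residual
        info = sum(p * (1.0 - p) for p in probs)              # Fisher information
        theta = max(-4.0, min(4.0, theta + grad / info))
    return theta
```

A characteristic Rasch property appears in the gradient: the ML ability estimate depends on the responses only through the raw score, so two examinees with the same number correct get the same estimate.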
Peer reviewed
Askegaard, Lewis D.; Umila, Benwardo V. – Journal of Educational Measurement, 1982
Multiple matrix sampling of items and examinees was applied to an 18-item rank order instrument administered to a randomly assigned group and compared to the ordering and ranking of all items by control subjects. High correlations between ranks suggest the methodology may viably reduce respondent effort on long rank ordering tasks. (Author/CM)
Descriptors: Evaluation Methods, Item Sampling, Junior High Schools, Student Reaction
Heller, Joan I.; Curtis, Deborah A.; Jaffe, Rebecca; Verboncoeur, Carol J. – Online Submission, 2005
This study investigated the relationship between instructional use of handheld graphing calculators and student achievement in Algebra 1. Three end-of-course test forms were administered (without calculators) using matrix sampling to 458 high-school students in two suburban school districts in Oregon and Kansas. Test questions on two forms were…
Descriptors: Test Items, Standardized Tests, Suburban Schools, Item Sampling
Berk, Ronald A. – 1978
Sixteen item statistics recommended for use in the development of criterion-referenced tests were evaluated. There were two major criteria: (1) practicability in terms of ease of computation and interpretation and (2) meaningfulness in the context of the development process. Most of the statistics were based on a comparison of performance changes…
Descriptors: Achievement Tests, Criterion Referenced Tests, Difficulty Level, Guides
Klein-Braley, Christine – 1984
This report investigates the selection of appropriate texts for C-Tests, a modified form of the cloze test, for assessing second language learning. The procedure for textbook readability first involved the administration of different texts to sample groups to determine the C-test difficulty of individual texts. At the same time, a variety of…
Descriptors: Cloze Procedure, Difficulty Level, English (Second Language), Foreign Countries
Scheetz, James P.; Forsyth, Robert A. – 1977
Empirical evidence is presented related to the effects of using a stratified sampling of items in multiple matrix sampling on the accuracy of estimates of the population mean. Data were obtained from a sample of 600 high school students for a 36-item mathematics test and a 40-item vocabulary test, both subtests of the Iowa Tests of Educational…
Descriptors: Achievement Tests, Difficulty Level, Item Analysis, Item Sampling
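In multiple matrix sampling, no examinee takes every item, yet the population mean total-test score can still be estimated by summing per-item mean scores across the subsamples that saw each item. A minimal sketch, assuming dichotomous items (the data layout and values are illustrative):

```python
from statistics import mean

def matrix_sample_mean(response_matrix):
    """Estimate the population mean total-test score from matrix-sampled data.

    response_matrix: dict mapping item id -> list of 0/1 responses from the
    examinee subsample that was administered that item (different subsamples
    see different item subsets, so list lengths may differ).
    The estimate is the sum over all items of each item's mean score.
    """
    return sum(mean(scores) for scores in response_matrix.values())
```

Stratifying the item sampling by difficulty, as this study investigates, changes which items land in each form and thus the variance (not the expectation) of this estimator.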