Showing all 9 results
Peer reviewed
Download full text (PDF on ERIC)
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when individuals with equal ability levels from different groups differ from each other on the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
Peer reviewed
Download full text (PDF on ERIC)
Musa Adekunle Ayanwale; Mdutshekelwa Ndlovu – Journal of Pedagogical Research, 2024
The COVID-19 pandemic has had a significant impact on high-stakes testing, including the national benchmark tests in South Africa. Current linear testing formats have been criticized for their limitations, leading to a shift towards Computerized Adaptive Testing (CAT). Assessments with CAT are more precise and take less time. Evaluation of CAT…
Descriptors: Adaptive Testing, Benchmarking, National Competency Tests, Computer Assisted Testing
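The core of the adaptive-testing approach this abstract refers to is selecting, at each step, the item most informative at the examinee's current ability estimate. A minimal sketch under the two-parameter logistic (2PL) IRT model, with a hypothetical item bank (the parameter values are illustrative, not from the study):

```python
import math

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *item_bank[i]))

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
bank = [(0.8, -1.0), (1.2, 0.0), (1.5, 0.5), (0.9, 1.5)]
next_item = select_next_item(0.4, bank, administered={1})
```

Because each item is chosen where it is most informative, fewer items are needed to reach a given measurement precision, which is the efficiency gain the abstract describes.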
Peer reviewed
Download full text (PDF on ERIC)
Aminifar, Elahe; Alipour, Mohammad – European Journal of Educational Sciences, 2014
An item bank is one of the main components of adaptive tests. In this research, a test was constructed in order to design and calibrate items for homogeneous second-order differential equations. The items were designed according to the subject's goal-content table and the learning domains of Bloom's taxonomy. The validity and reliability of these items were…
Descriptors: Test Items, Calculus, Mathematics Tests, Mathematics Instruction
Peer reviewed
Direct link
Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis
Peer reviewed
Hattie, John – Multivariate Behavioral Research, 1984
This paper describes a simulation that determines the adequacy of various indices as decision criteria for assessing unidimensionality. Using the sum of absolute residuals from the two-parameter latent trait model, indices were obtained that could discriminate between one latent trait and more than one latent trait. (Author/BW)
Descriptors: Achievement Tests, Latent Trait Theory, Mathematical Models, Monte Carlo Methods
Peer reviewed
Garg, Rashmi; And Others – Journal of Educational Measurement, 1986
For the purpose of obtaining data to use in test development, multiple matrix sampling plans were compared to examinee sampling plans. Data were simulated for examinees, sampled from a population with a normal distribution of ability, responding to items selected from an item universe. (Author/LMO)
Descriptors: Difficulty Level, Monte Carlo Methods, Sampling, Statistical Studies
Peer reviewed
Direct link
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2006
A lognormal model for the response times of a person on a set of test items is investigated. The model has a parameter structure analogous to the two-parameter logistic response models in item response theory, with a parameter for the speed of each person as well as parameters for the time intensity and discriminating power of each item. It is…
Descriptors: Test Items, Vocational Aptitude, Reaction Time, Markov Processes
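The parameter structure the abstract describes can be written out explicitly. As usually stated for van der Linden's model, the log response time of person $j$ on item $i$ is normally distributed:

```latex
\ln T_{ij} \sim \mathcal{N}\bigl(\beta_i - \tau_j,\ \alpha_i^{-2}\bigr)
```

where $\tau_j$ is the person's speed, $\beta_i$ the item's time intensity, and $\alpha_i$ its discriminating power, paralleling the ability, difficulty, and discrimination parameters of the two-parameter logistic response model.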
Hisama, Kay K.; And Others – 1977
The optimal test length, using predictive validity as a criterion, depends on two major conditions: appropriate item difficulty rather than the total number of items, and the method used to score the test. These conclusions were reached when responses to a 100-item multi-level test of reading comprehension from 136 non-native speakers of…
Descriptors: College Students, Difficulty Level, English (Second Language), Foreign Students
Peer reviewed
Ackerman, Terry A. – Journal of Educational Measurement, 1992
The difference between item bias and item impact and the way they relate to item validity are discussed from a multidimensional item response theory perspective. The Mantel-Haenszel procedure and the Simultaneous Item Bias strategy are used in a Monte Carlo study to illustrate detection of item bias. (SLD)
Descriptors: Causal Models, Computer Simulation, Construct Validity, Equations (Mathematics)
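The Mantel-Haenszel procedure mentioned in the abstract compares correct/incorrect counts for a reference group and a focal group within strata matched on total score. A minimal sketch of the common odds ratio it estimates, using hypothetical counts (not data from the study):

```python
def mantel_haenszel_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matched score strata.

    Each stratum is a tuple (A, B, C, D):
      A = reference group correct,  B = reference group incorrect,
      C = focal group correct,      D = focal group incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts for one item at three total-score levels.
strata = [(30, 10, 25, 15), (40, 20, 30, 30), (20, 30, 10, 40)]
alpha_mh = mantel_haenszel_odds_ratio(strata)
```

A ratio near 1 indicates no DIF; values well above or below 1 suggest the item favors the reference or focal group, respectively, after conditioning on ability.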