Yoder, S. Elizabeth; Kurz, M. Elizabeth – Journal of Education for Business, 2015
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula, and modeling an LP problem is taught in every linear programming class. As faculty teaching in engineering and management departments, the authors consider the depth to which teachers should expect students to master this particular type of…
Descriptors: Programming, Educational Practices, Engineering, Engineering Education
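The abstract above concerns modeling LP problems in class. As a minimal illustration (a hypothetical two-variable problem, not one from the article), a tiny LP can be solved by enumerating the intersections of constraint boundaries and keeping the feasible vertex with the best objective value:

```python
from itertools import combinations

def solve_2var_lp(obj, cons):
    """Maximize obj[0]*x + obj[1]*y over {(x, y): a*x + b*y <= c
    for each (a, b, c) in cons}, by enumerating candidate vertices
    (pairwise intersections of constraint boundary lines)."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries: no unique intersection
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        # keep the point only if it satisfies every constraint
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            val = obj[0] * x + obj[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

# maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0
print(solve_2var_lp((3, 2), [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]))
# (10.0, 2.0, 2.0)
```

Vertex enumeration only scales to toy problems; a real class would use a simplex or interior-point solver, but the modeling step (objective plus inequality constraints) is the same.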

Qualls-Payne, Audrey L. – Journal of Educational Measurement, 1992
Six methods for estimating the standard error of measurement (SEM) at specific score levels are evaluated by comparing score-level SEM estimates from a single test administration to estimates from two test administrations, using Iowa Tests of Basic Skills data for 2,138 examinees. L. S. Feldt's method is preferred. (SLD)
Descriptors: Comparative Testing, Elementary Education, Elementary School Students, Error of Measurement
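For context on the quantity being estimated above: in classical test theory the overall SEM follows from the score standard deviation and the test's reliability (the score-level methods compared in the study refine this to specific score points). A minimal sketch of the classical formula, with illustrative numbers that are not from the study:

```python
import math

def sem(sd, reliability):
    """Classical test theory standard error of measurement:
    SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# hypothetical test: score SD of 15, reliability 0.91
print(round(sem(15.0, 0.91), 2))  # 4.5
```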
Chang, Yu-Wen; Davison, Mark L. – 1992
Standard errors and bias of unidimensional and multidimensional ability estimates were compared in a factorial, simulation design with two item response theory (IRT) approaches, two levels of test correlation (0.42 and 0.63), two sample sizes (500 and 1,000), and a hierarchical test content structure. Bias and standard errors of subtest scores…
Descriptors: Comparative Testing, Computer Simulation, Correlation, Error of Measurement
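The simulation above compares ability estimates under item response theory (IRT). As background, a common IRT building block is the two-parameter logistic (2PL) item response function; the sketch below shows that standard formula (the study's specific models and multidimensional structure are not reproduced here):

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL IRT item response function: probability that an examinee of
    ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# an examinee of average ability on an average-difficulty item
print(round(p_correct_2pl(0.0, 1.0, 0.0), 2))  # 0.5
```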
Spray, Judith A.; Miller, Timothy R. – 1992
A popular method of analyzing test items for differential item functioning (DIF) is to compute a statistic that conditions samples of examinees from different populations on an estimate of ability. This conditioning or matching by ability is intended to produce an appropriate statistic that is sensitive to true differences in item functioning,…
Descriptors: Blacks, College Entrance Examinations, Comparative Testing, Computer Simulation
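The abstract describes DIF statistics that condition on ability; one popular statistic of this kind is the Mantel-Haenszel common odds ratio computed over matched score strata. A minimal sketch under that assumption (the abstract does not name the specific statistic studied), with made-up counts:

```python
def mantel_haenszel_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matched-ability strata.
    Each stratum is a 2x2 table (a, b, c, d):
    a = reference group correct, b = reference group incorrect,
    c = focal group correct,     d = focal group incorrect.
    A ratio near 1 suggests no DIF on the item."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# two hypothetical score strata
strata = [(30, 10, 20, 20), (40, 10, 30, 20)]
print(round(mantel_haenszel_odds_ratio(strata), 3))  # 2.818
```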