Showing 1 to 15 of 17 results
Peer reviewed
PDF on ERIC
Britson, Carol A. – HAPS Educator, 2022
Reflections on the efficacy of pedagogical changes and practices and their effect on student performance are often hindered by incomplete data, small sample sizes, and the confounding variables of multiple instructors and teaching sites. Observations from such retrospective analyses, however, are highly sought after by instructors and…
Descriptors: Anatomy, Physiology, COVID-19, Pandemics
Peer reviewed
Direct link
Tintle, Nathan; Clark, Jake; Fischer, Karen; Chance, Beth; Cobb, George; Roy, Soma; Swanson, Todd; VanderStoep, Jill – Journal of Statistics Education, 2018
The recent simulation-based inference (SBI) movement in algebra-based introductory statistics courses (Stat 101) has provided preliminary evidence of improved student conceptual understanding and retention. However, little is known about whether these positive effects are preferentially distributed across types of students entering the course. We…
Descriptors: Statistics, College Mathematics, College Preparation, Mathematical Concepts
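The "simulation-based inference" approach named in the Tintle et al. abstract replaces theory-based tests in introductory statistics with randomization and bootstrap procedures. The snippet below is a minimal sketch of one such randomization test for a difference in group means; the data and seed are made up for illustration and nothing here is taken from the article itself.

```python
# Minimal sketch of a randomization (permutation) test, the kind of procedure
# simulation-based inference (SBI) curricula teach in place of t-tests.
# The scores below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

group_a = np.array([72, 81, 68, 90, 77, 85])
group_b = np.array([65, 70, 74, 66, 79, 71])

observed_diff = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)

# Re-shuffle group labels many times to build the null distribution
# of the difference in means.
diffs = []
for _ in range(10_000):
    perm = rng.permutation(pooled)
    diffs.append(perm[:n_a].mean() - perm[n_a:].mean())
diffs = np.array(diffs)

# Two-sided p-value: proportion of shuffles at least as extreme as observed.
p_value = np.mean(np.abs(diffs) >= abs(observed_diff))
print(f"observed diff = {observed_diff:.2f}, randomization p = {p_value:.4f}")
```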
Peer reviewed
Direct link
Thompson, Andrew R.; Lowrie, Donald J., Jr. – Anatomical Sciences Education, 2017
Changes in medical school curricula often require educators to develop teaching strategies that decrease contact hours while maintaining effective pedagogical methods. When faced with this challenge, faculty at the University of Cincinnati College of Medicine converted the majority of in-person histology laboratory sessions to self-study modules…
Descriptors: Independent Study, Anatomy, Medical Education, Outcomes of Education
Peer reviewed
PDF on ERIC
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
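The Bulut and Kan abstract describes the core CAT loop: after each response the ability estimate is updated and the next item is chosen to match it. Below is a toy sketch of that loop using a Rasch model, a simulated item bank, and EAP scoring on a grid; it is an illustration of the general mechanism, not the CAT design studied in the article.

```python
# Minimal adaptive-testing loop: select the item closest in difficulty to the
# current ability estimate (maximum information under the Rasch model),
# administer it, then update the ability posterior.
import numpy as np

rng = np.random.default_rng(1)

bank_b = rng.uniform(-2.5, 2.5, size=200)        # item difficulties
true_theta = 0.8                                 # simulated examinee ability
grid = np.linspace(-4, 4, 161)
prior = np.exp(-0.5 * grid**2)                   # standard normal prior

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

posterior = prior.copy()
administered, responses = [], []
theta_hat = 0.0

for step in range(20):                           # fixed-length test, 20 items
    # Under the Rasch model, information is maximized when b is closest
    # to the current ability estimate.
    available = [j for j in range(len(bank_b)) if j not in administered]
    next_item = min(available, key=lambda j: abs(bank_b[j] - theta_hat))

    u = rng.random() < p_correct(true_theta, bank_b[next_item])
    administered.append(next_item)
    responses.append(int(u))

    # Bayesian update of the posterior over the ability grid (EAP estimate).
    p = p_correct(grid, bank_b[next_item])
    posterior *= p if u else (1.0 - p)
    posterior /= posterior.sum()
    theta_hat = float(np.sum(grid * posterior))

print(f"true theta = {true_theta:.2f}, EAP estimate after 20 items = {theta_hat:.2f}")
```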
Peer reviewed
Direct link
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander – Applied Psychological Measurement, 2008
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Descriptors: Test Items, Monte Carlo Methods, Law Schools, Adaptive Testing
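The Belov, Armstrong, and Weissman abstract concerns CAT item selection under content constraints. The snippet below shows only the simpler greedy baseline such methods improve on: pick the most informative eligible item whose content area still has unused quota. It is a point of reference under made-up parameters, not the Monte Carlo shadow-test algorithm the article proposes.

```python
# Greedy content-constrained selection: maximum Rasch information subject to
# per-area quotas. Illustrative only.
import numpy as np

def rasch_info(theta, b):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, bank_b, bank_area, used, area_counts, area_quota):
    """Most informative unused item whose content area is under quota."""
    best, best_info = None, -1.0
    for j in range(len(bank_b)):
        if j in used or area_counts[bank_area[j]] >= area_quota[bank_area[j]]:
            continue
        info = rasch_info(theta, bank_b[j])
        if info > best_info:
            best, best_info = j, info
    return best

# Tiny made-up bank: difficulties and content areas with per-area quotas.
bank_b = np.array([-1.0, -0.2, 0.1, 0.4, 1.2, 1.5])
bank_area = ["algebra", "algebra", "geometry", "geometry", "data", "data"]
area_quota = {"algebra": 1, "geometry": 1, "data": 1}
area_counts = {a: 0 for a in area_quota}

used = set()
for _ in range(3):
    j = next_item(0.0, bank_b, bank_area, used, area_counts, area_quota)
    used.add(j)
    area_counts[bank_area[j]] += 1
    print(f"selected item {j} ({bank_area[j]}, b = {bank_b[j]:+.1f})")
```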
Peer reviewed
Ramsay, J. O. – Psychometrika, 1991
Kernel smoothing methods for nonparametric item characteristic curve estimation are reviewed. A simulation with 500 examinees and real data from 3,000 records of the Graduate Record Examination illustrate the rapidity of kernel smoothing. Even when population curves are three-parameter logistic, simulation suggests no loss of efficiency. (SLD)
Descriptors: College Entrance Examinations, Computer Simulation, Efficiency, Equations (Mathematics)
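Kernel smoothing of an item characteristic curve, in the spirit of the methods the Ramsay abstract reviews, amounts to a nonparametric regression of item responses on an ability surrogate. The sketch below uses a Nadaraya-Watson (Gaussian-kernel) estimator on simulated data; the bandwidth and surrogate are simplifications, not Ramsay's exact choices.

```python
# Nonparametric ICC estimate via Nadaraya-Watson kernel regression of
# responses on rank-based normal scores of an ability surrogate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 500

theta = rng.normal(size=n)                       # simulated abilities
total = rng.normal(theta, 0.5)                   # crude ability surrogate
# Replace the surrogate by normal scores of its ranks, as is common in
# nonparametric ICC estimation.
ranks = np.argsort(np.argsort(total))
theta_surrogate = norm.ppf((ranks + 0.5) / n)

# Simulate responses to one 2PL item (a = 1.3, b = 0.2).
p_true = 1.0 / (1.0 + np.exp(-1.3 * (theta - 0.2)))
u = (rng.random(n) < p_true).astype(float)

def kernel_icc(t0, h=0.3):
    """Nadaraya-Watson estimate of P(correct | theta = t0)."""
    w = np.exp(-0.5 * ((theta_surrogate - t0) / h) ** 2)
    return np.sum(w * u) / np.sum(w)

for t0 in (-1.0, 0.0, 1.0):
    print(f"theta = {t0:+.1f}: smoothed P(correct) = {kernel_icc(t0):.2f}")
```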
Peer reviewed
Stout, William – Psychometrika, 1987
A procedure--based on item response theory--for testing the hypothesis of unidimensionality of the latent space is proposed. Use of the procedure is supported by an asymptotic theory and a Monte Carlo simulation study. The procedure tests for unidimensionality in test construction and/or compares two tests. (SLD)
Descriptors: College Entrance Examinations, Computer Simulation, Equations (Mathematics), Hypothesis Testing
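Stout's procedure splits the test into an assessment subtest and a partitioning subtest and builds an asymptotically normal statistic, which is beyond a short sketch. As a much cruder, clearly different illustration of what a dominant single dimension looks like in item data, the snippet below compares the first two eigenvalues of the inter-item correlation matrix for simulated one-factor responses; this heuristic is not Stout's statistic.

```python
# Eigenvalue-ratio heuristic for approximate unidimensionality on simulated
# Rasch (one-factor) response data. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 2000, 20

theta = rng.normal(size=n_persons)
b = np.linspace(-1.5, 1.5, n_items)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(float)

eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False)))[::-1]
print(f"first/second eigenvalue ratio = {eigvals[0] / eigvals[1]:.1f} "
      "(large ratios are consistent with a dominant single dimension)")
```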
Stocking, Martha L.; Eignor, Daniel R. – 1986
In item response theory (IRT), preequating depends upon item parameter estimate invariance. Three separate simulations, all using the unidimensional three-parameter logistic item response model, were conducted to study the impact of the following variables on preequating: (1) mean differences in ability; (2) multidimensionality in the data; and…
Descriptors: College Entrance Examinations, Computer Simulation, Equated Scores, Error of Measurement
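All three simulations in the Stocking and Eignor study used the unidimensional three-parameter logistic (3PL) model. For reference, the snippet below transcribes that item response function with the conventional D = 1.7 scaling constant; the abstract does not say which scaling the original runs used, and the example parameter values are made up.

```python
# Three-parameter logistic (3PL) item response function.
import numpy as np

def p_3pl(theta, a, b, c, D=1.7):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

# Example: a moderately discriminating item (a = 1.0, b = 0.0, c = 0.2)
# evaluated at three ability levels.
print(p_3pl(np.array([-1.0, 0.0, 1.0]), a=1.0, b=0.0, c=0.2))
```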
Peer reviewed
McKinley, Robert L. – Journal of Educational Measurement, 1988
Six procedures for combining sets of item response theory (IRT) item parameter estimates from different samples were evaluated using real and simulated response data. Results support use of covariance matrix-weighted averaging and a procedure using sample-size-weighted averaging of estimated item characteristic curves at the center of the ability…
Descriptors: College Entrance Examinations, Comparative Analysis, Computer Simulation, Estimation (Mathematics)
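One of the procedures the McKinley abstract supports averages estimated item characteristic curves across samples, weighting by sample size, rather than averaging the parameters themselves. The sketch below shows that averaging step on a theta grid; the 2PL form and parameter values are illustrative, and the final refitting of a single parameter set to the combined curve is omitted.

```python
# Sample-size-weighted averaging of estimated item characteristic curves.
import numpy as np

def icc_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

grid = np.linspace(-3, 3, 61)

# Hypothetical estimates of the same item from two calibration samples.
estimates = [
    {"a": 1.10, "b": 0.15, "n": 800},
    {"a": 0.95, "b": 0.05, "n": 1200},
]

weights = np.array([e["n"] for e in estimates], dtype=float)
weights /= weights.sum()

curves = np.array([icc_2pl(grid, e["a"], e["b"]) for e in estimates])
combined_icc = weights @ curves     # sample-size-weighted average curve

# A single set of parameters could then be recovered by fitting a 2PL curve
# to combined_icc; that refitting step is not shown here.
print(np.round(combined_icc[::15], 3))
```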
Mislevy, Robert J.; Bock, R. Darrell – 1982
This paper reviews the basic elements of the EM approach to estimating item parameters and illustrates its use with one simulated and one real data set. In order to illustrate the use of the BILOG computer program, runs for 1-, 2-, and 3-parameter models are presented for the two sets of data. First is a set of responses from 1,000 persons to five…
Descriptors: College Entrance Examinations, Computer Oriented Programs, Computer Simulation, Computer Software
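The EM approach reviewed by Mislevy and Bock estimates item parameters by marginal maximum likelihood, integrating over ability with quadrature. Below is a stripped-down sketch of such an EM fit for the one-parameter (Rasch) case only, on simulated data; BILOG itself handles 1-, 2-, and 3-parameter models and many refinements not shown here.

```python
# Marginal maximum likelihood via EM for Rasch difficulties, with Gaussian
# quadrature over ability. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(4)
n_persons, n_items = 1000, 5

# Simulate Rasch data so the sketch is self-contained.
true_b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
theta = rng.normal(size=n_persons)
U = (rng.random((n_persons, n_items)) <
     1.0 / (1.0 + np.exp(-(theta[:, None] - true_b[None, :])))).astype(float)

# Quadrature nodes and normalized standard-normal weights.
nodes = np.linspace(-4, 4, 21)
A = np.exp(-0.5 * nodes**2)
A /= A.sum()

b_hat = np.zeros(n_items)
for cycle in range(50):
    # E-step: posterior weight of each examinee at each quadrature node.
    P = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b_hat[None, :])))   # (Q, J)
    logL = U @ np.log(P).T + (1 - U) @ np.log(1 - P).T             # (N, Q)
    post = np.exp(logL) * A
    post /= post.sum(axis=1, keepdims=True)

    n_bar = post.sum(axis=0)          # expected examinees at each node
    r_bar = post.T @ U                # expected correct responses, (Q, J)

    # M-step: one Newton step per item for the Rasch difficulty.
    expected_correct = (n_bar[:, None] * P).sum(axis=0)
    info = (n_bar[:, None] * P * (1 - P)).sum(axis=0)
    b_hat = b_hat + (expected_correct - r_bar.sum(axis=0)) / info

print("true b:     ", np.round(true_b, 2))
print("estimated b:", np.round(b_hat, 2))
```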
Eignor, Daniel R.; And Others – 1993
The extensive computer simulation work done in developing the computer adaptive versions of the Graduate Record Examinations (GRE) Board General Test and the College Board Admissions Testing Program (ATP) Scholastic Aptitude Test (SAT) is described in this report. Both the GRE General and SAT computer adaptive tests (CATs), which are fixed length…
Descriptors: Adaptive Testing, Algorithms, Case Studies, College Entrance Examinations
Peer reviewed
Kennedy, Peter; Walstad, William B. – Applied Measurement in Education, 1997
The consequences in terms of misclassifications of students that would occur by replacing the constructed-response portion of the Advanced Placement (AP) examinations in economics with more multiple-choice items were studied. The 1991 AP examinations in micro- and macroeconomics were used. Computer simulation found that a small but statistically…
Descriptors: Classification, College Entrance Examinations, Computer Simulation, Constructed Response
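The Kennedy and Walstad study asks how many students would be classified differently if the constructed-response section were replaced by more multiple-choice items. A generic sketch of that kind of misclassification simulation follows: draw true scores, add measurement error under two reliabilities, and count disagreements at a cut score. The reliabilities, cut point, and normality assumptions are illustrative, not the article's design.

```python
# Misclassification rate between two hypothetical test forms that measure the
# same true score with different reliabilities.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

true_score = rng.normal(0.0, 1.0, n)

def observed(true, reliability):
    """Classical-test-theory observed score with the given reliability."""
    error_sd = np.sqrt((1.0 - reliability) / reliability)
    return true + rng.normal(0.0, error_sd, len(true))

score_mixed = observed(true_score, reliability=0.90)   # MC + constructed response
score_mc    = observed(true_score, reliability=0.92)   # all multiple choice

cut = 0.5                                              # hypothetical cut score
disagree = np.mean((score_mixed >= cut) != (score_mc >= cut))
print(f"proportion classified differently by the two forms: {disagree:.3f}")
```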
Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting which assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation
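The Ackerman abstract concerns what a unidimensional IRT model estimates when items are really multidimensional. The small sketch below shows a compensatory two-dimensional 2PL item response function and the composite direction each item measures (the angle implied by its discrimination vector); the parameter values are made up for illustration.

```python
# Compensatory multidimensional 2PL (M2PL) response probabilities and the
# composite angle each item's discrimination vector implies.
import numpy as np

def p_m2pl(theta, a, d):
    """Compensatory M2PL: P = 1 / (1 + exp(-(a . theta + d)))."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

items = [
    {"a": np.array([1.2, 0.1]), "d": 0.0},   # mostly measures dimension 1
    {"a": np.array([0.7, 0.7]), "d": 0.0},   # measures an even composite
    {"a": np.array([0.1, 1.3]), "d": 0.0},   # mostly measures dimension 2
]

theta = np.array([0.5, -0.5])                # one examinee's two abilities
for k, item in enumerate(items):
    angle = np.degrees(np.arctan2(item["a"][1], item["a"][0]))
    print(f"item {k}: P(correct) = {p_m2pl(theta, item['a'], item['d']):.2f}, "
          f"composite angle from dimension 1 = {angle:.0f} degrees")
```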
Spray, Judith A.; Miller, Timothy R. – 1992
A popular method of analyzing test items for differential item functioning (DIF) is to compute a statistic that conditions samples of examinees from different populations on an estimate of ability. This conditioning or matching by ability is intended to produce an appropriate statistic that is sensitive to true differences in item functioning,…
Descriptors: Blacks, College Entrance Examinations, Comparative Testing, Computer Simulation
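A widely used instance of the kind of DIF statistic the Spray and Miller abstract describes, conditioning groups on an ability estimate, is the Mantel-Haenszel common odds ratio computed across score strata. The sketch below applies it to simulated data for a single studied item; it illustrates the general conditioning approach, not necessarily the specific statistic examined in the paper.

```python
# Mantel-Haenszel DIF check on a studied item, conditioning on the rest score
# (total score excluding the studied item). Data are simulated.
import numpy as np

rng = np.random.default_rng(6)
n_items, n_per_group = 20, 2000
b = np.linspace(-1.5, 1.5, n_items)

def simulate(n, dif_shift):
    theta = rng.normal(size=n)
    b_group = b.copy()
    b_group[0] += dif_shift        # studied item is harder by dif_shift
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_group[None, :])))
    return (rng.random((n, n_items)) < p).astype(int)

ref = simulate(n_per_group, dif_shift=0.0)
foc = simulate(n_per_group, dif_shift=0.5)     # item 0 shows DIF against focal

ref_match, foc_match = ref[:, 1:].sum(axis=1), foc[:, 1:].sum(axis=1)

num = den = 0.0
for s in range(n_items):                       # rest score ranges 0..19
    r, f = ref[ref_match == s, 0], foc[foc_match == s, 0]
    n_total = len(r) + len(f)
    if n_total == 0:
        continue
    # 2x2 table at this stratum: group x (correct, incorrect).
    num += r.sum() * (len(f) - f.sum()) / n_total
    den += f.sum() * (len(r) - r.sum()) / n_total

alpha_mh = num / den
print(f"Mantel-Haenszel common odds ratio = {alpha_mh:.2f} "
      "(values far from 1 flag DIF on the studied item)")
```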
Levine, Michael V.; Drasgow, Fritz – 1984
Some examinees' test-taking behavior may be so idiosyncratic that their scores are not comparable to the scores of more typical examinees. Appropriateness indices, which provide quantitative measures of response-pattern atypicality, can be viewed as statistics for testing a null hypothesis of normal test-taking behavior against an alternative…
Descriptors: Cheating, College Entrance Examinations, Computer Simulation, Estimation (Mathematics)
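Appropriateness measurement quantifies how atypical an examinee's response pattern is under an IRT model. A standard index of this kind is the standardized log-likelihood statistic l_z sketched below; it illustrates the general idea of such indices rather than the particular ones analyzed by Levine and Drasgow, and the probabilities and response patterns are hypothetical.

```python
# Standardized person-fit statistic l_z for a response pattern given model
# probabilities of a correct response on each item.
import numpy as np

def lz_statistic(u, p):
    """l_z = (observed log-likelihood - its expectation) / its std. deviation."""
    log_lik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * (np.log(p / (1 - p))) ** 2)
    return (log_lik - expected) / np.sqrt(variance)

# Hypothetical model probabilities for one examinee on ten items,
# ordered from easiest to hardest for that examinee.
p = np.array([0.9, 0.85, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.15])

typical  = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])   # consistent pattern
atypical = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1])   # misses easy, hits hard

print(f"l_z typical  = {lz_statistic(typical, p):+.2f}")
print(f"l_z atypical = {lz_statistic(atypical, p):+.2f} "
      "(large negative values flag misfitting patterns)")
```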