Publication Date
In 2025 | 81 |
Since 2024 | 396 |
Since 2021 (last 5 years) | 772 |
Since 2016 (last 10 years) | 798 |
Since 2006 (last 20 years) | 801 |
Author
van der Linden, Wim J. | 17 |
Kiers, Henk A. L. | 13 |
ten Berge, Jos M. F. | 10 |
Gongjun Xu | 9 |
Gerlach, Vernon S. | 8 |
Willett, Peter | 8 |
Chun Wang | 7 |
Stocking, Martha L. | 7 |
Charp, Sylvia | 6 |
Chen, Hsinchun | 6 |
Craven, Timothy C. | 6 |
Audience
Practitioners | 255 |
Teachers | 126 |
Researchers | 114 |
Policymakers | 6 |
Administrators | 5 |
Students | 4 |
Counselors | 1 |
Media Staff | 1 |
Support Staff | 1 |
Location
Australia | 17 |
China | 17 |
Netherlands | 14 |
Turkey | 13 |
USSR | 10 |
United States | 10 |
California | 8 |
United Kingdom (England) | 7 |
Brazil | 6 |
Europe | 6 |
Germany | 6 |

Price, Lydia J. – Multivariate Behavioral Research, 1993
The ability of the NORMIX algorithm to recover overlapping population structures was compared to the OVERCLUS procedure and another clustering procedure in a Monte Carlo study. NORMIX is found to be more accurate than other procedures in recovering overlapping population structure when appropriate implementation options are specified. (SLD)
Descriptors: Algorithms, Classification, Cluster Analysis, Comparative Analysis
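The mixture-decomposition idea behind NORMIX can be illustrated with a minimal sketch: fitting a two-component normal mixture by EM to data drawn from two overlapping populations. The function and parameter names below are illustrative, not from Wolfe's original NORMIX implementation, and the example is one-dimensional for brevity.

```python
# Sketch: recover two overlapping normal populations with EM
# (illustrative stand-in for NORMIX-style mixture decomposition).
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_two_component_mixture(data, n_iter=200):
    """EM for the weights, means, and std devs of a 2-component normal mixture."""
    mu = [min(data), max(data)]          # crude initialisation at the extremes
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 0 for each point
        resp = []
        for x in data:
            p0 = w[0] * normal_pdf(x, mu[0], sigma[0])
            p1 = w[1] * normal_pdf(x, mu[1], sigma[1])
            resp.append(p0 / (p0 + p1))
        # M-step: re-estimate each component from its responsibilities
        for k, r in ((0, resp), (1, [1 - ri for ri in resp])):
            total = sum(r)
            w[k] = total / len(data)
            mu[k] = sum(ri * x for ri, x in zip(r, data)) / total
            var = sum(ri * (x - mu[k]) ** 2 for ri, x in zip(r, data)) / total
            sigma[k] = max(math.sqrt(var), 1e-6)   # guard against collapse
    return w, mu, sigma

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(3.0, 1.0) for _ in range(300)]   # overlapping populations
w, mu, sigma = fit_two_component_mixture(data)
print(sorted(round(m, 1) for m in mu))
```

With the populations centred at 0 and 3 (one standard deviation of overlap on each side), the estimated means land near the true centres, which is the kind of structure recovery the Monte Carlo study evaluated.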

Kostoff, Ronald N.; Eberhart, Henry J.; Toothman, Darrell Ray – Information Processing & Management, 1998
Database Tomography (DT) is a system which includes algorithms for extracting multi-word phrase frequencies and performing phrase proximity analyses on any type of large textual database. DT was used to derive technical intelligence from a near-earth space (NES) database derived from the Science Citation Index and the Engineering Compendex.…
Descriptors: Algorithms, Bibliometrics, Databases, Information Retrieval
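The multi-word phrase-frequency step of Database Tomography can be sketched as n-gram counting over a document collection. This is an illustrative reconstruction, not Kostoff et al.'s actual code; the sample abstracts are invented.

```python
# Sketch: count multi-word phrases (bigrams and trigrams) across texts,
# the first stage of a Database Tomography-style analysis.
from collections import Counter

def phrase_frequencies(texts, n_values=(2, 3)):
    """Count n-word phrases across a collection of texts."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        for n in n_values:
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
    return counts

abstracts = [
    "near earth space debris tracking",
    "near earth space propulsion studies",
]
freq = phrase_frequencies(abstracts)
print(freq.most_common(3))
```

High-frequency phrases such as "near earth space" surface the pervasive technical themes of the database; phrase proximity analysis would then examine which phrases co-occur near one another.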

Baker, Bruce D.; Richards, Craig E. – Economics of Education Review, 1999
Applies neural network methods for forecasting 1991-95 per-pupil expenditures in U.S. public elementary and secondary schools. Forecasting models included the National Center for Education Statistics' multivariate regression model and three neural architectures. Regarding prediction accuracy, neural network results were comparable or superior to…
Descriptors: Algorithms, Econometrics, Elementary Secondary Education, Expenditure per Student

Andaloro, G.; Bellomonte, L. – Computers & Education, 1998
Presents a student module modeling knowledge states and learning skills of students in the field of Newtonian dynamics. Uses data recorded during the exploratory activity in microworlds to infer mental representations concerning the concept of force. A fuzzy algorithm able to follow the cognitive states the student goes through in solving a task…
Descriptors: Algorithms, Cognitive Processes, Instructional Design, Knowledge Level

Abu Bakar, Zainab; Sembok, Tengku Mohd T.; Yusoff, Mohammed – Journal of the American Society for Information Science, 2000
Evaluates the effectiveness of spelling-correction and string-similarity matching methods in retrieving similar words in a Malay dictionary associated with a set of query words. Describes experiments that showed the best search combination used a stemming algorithm, and calculates retrieval effectiveness and relevant means measures from weighted…
Descriptors: Algorithms, Dictionaries, Evaluation Methods, Information Retrieval
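String-similarity matching of the kind evaluated here can be sketched with Levenshtein edit distance: rank dictionary entries by distance to a (possibly misspelled) query word. The Malay words below are illustrative; the paper's actual stemming and weighting schemes are not reproduced.

```python
# Sketch: retrieve dictionary words most similar to a query
# via dynamic-programming Levenshtein distance.
def edit_distance(a, b):
    """Classic DP edit distance (insert, delete, substitute all cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def best_matches(query, dictionary, k=3):
    """Rank dictionary words by edit distance to the query."""
    return sorted(dictionary, key=lambda w: edit_distance(query, w))[:k]

words = ["makan", "makanan", "minum", "malam", "jalan"]
print(best_matches("makann", words, k=2))
```

A stemming step, which the study found most effective in combination with string matching, would normalise both query and dictionary words to root forms before the distance comparison.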

Timpone, Richard J.; Taber, Charles S. – Social Science Computer Review, 1998
Compares traditional mathematical models with computer simulations. Shows the strengths and flexibility of algorithmic computational simulations through a program designed to investigate and extend understanding in one of the most enduring questions in social choice research. Discusses solutions to this problem from each approach--analytic and…
Descriptors: Algorithms, Computation, Computer Oriented Programs, Computer Simulation

Vrajitoru, Dana – Information Processing & Management, 1998
In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system find, in a huge document collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…
Descriptors: Algorithms, Comparative Analysis, Information Retrieval, Information Seeking
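The crossover step in a GA for query refinement can be sketched with individuals encoded as binary masks over candidate query terms. The operator shown is plain one-point crossover for illustration, not the new crossover operation the article proposes.

```python
# Sketch: one-point crossover over binary query-term masks,
# the recombination step of a GA for information retrieval.
import random

def one_point_crossover(parent_a, parent_b, rng):
    """Swap the tails of two equal-length term masks at a random cut point."""
    cut = rng.randrange(1, len(parent_a))
    return (parent_a[:cut] + parent_b[cut:],
            parent_b[:cut] + parent_a[cut:])

rng = random.Random(42)
a = [1, 1, 1, 1, 1, 1]     # query uses all six candidate terms
b = [0, 0, 0, 0, 0, 0]     # query uses none of them
child1, child2 = one_point_crossover(a, b, rng)
print(child1, child2)
```

In a full GA each child mask would be scored by retrieval effectiveness against relevance judgments, with the fittest query formulations surviving to the next generation.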

Cappelleri, Joseph C.; And Others – Evaluation Review, 1994
A statistical power algorithm based on the Fisher Z method is developed for cutoff-based random clinical trials and the single cutoff-point (regression-discontinuity) design that has no randomization. This article quantifies power and sample size estimates for various levels of power and cutoff-based assignment. (Author/SLD)
Descriptors: Algorithms, Cutting Scores, Estimation (Mathematics), Power (Statistics)
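The Fisher Z approach to power for a correlation can be sketched directly: transform the effect size with atanh, scale by the standard error 1/sqrt(n-3), and compare against the normal critical value. This is the standard Fisher z-transformation calculation, not Cappelleri et al.'s exact algorithm for cutoff-based assignment.

```python
# Sketch: statistical power for detecting a correlation rho with
# sample size n, via the Fisher z transformation (one-sided test).
import math
from statistics import NormalDist

def fisher_z_power(rho, n, alpha=0.05):
    """Power to detect correlation rho > 0 against H0: rho = 0."""
    z_rho = math.atanh(rho)                     # Fisher z of the effect size
    se = 1.0 / math.sqrt(n - 3)                 # SE of z for sample size n
    z_crit = NormalDist().inv_cdf(1 - alpha)    # one-sided critical value
    return NormalDist().cdf(z_rho / se - z_crit)

print(round(fisher_z_power(0.3, 100), 3))
```

Inverting the same relation for a desired power level yields the sample-size estimates the article tabulates for various cutoff-based assignment schemes.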

Nakhleh, Mary B.; And Others – Journal of Chemical Education, 1996
Reports on a study, Project REMODEL, that implemented and evaluated innovations in lecture, laboratory, and assessment for students in the introductory sequence for undergraduate chemistry. Results indicate that special sessions and conceptual exam questions can improve students' abilities to work successfully with both concepts and algorithms.…
Descriptors: Algorithms, Chemistry, Educational Change, Educational Innovation

Bergstrom, Betty A.; Lunz, Mary E. – 1991
The equivalence of pencil and paper Rasch item calibrations when used in a computer adaptive test administration was explored in this study. Items (n=726) were precalibrated with the pencil and paper test administrations. A computer adaptive test was administered to 321 medical technology students using the pencil and paper precalibrations in the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing

Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing

Davey, Tim; Parshall, Cynthia G. – 1995
Although computerized adaptive tests acquire their efficiency by successively selecting items that provide optimal measurement at each examinee's estimated level of ability, operational testing programs will typically consider additional factors in item selection. In practice, items are generally selected with regard to at least three, often…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing

Board, Raymond Acton – 1990
This thesis addresses problems from two areas of theoretical computer science. The first area is that of computational learning theory, which is the study of the phenomenon of concept learning using formal mathematical models. The goal of computational learning theory is to investigate learning in a rigorous manner through the use of techniques…
Descriptors: Algorithms, Computer Science, Computer Science Education, Higher Education

Ackerman, Terry A. – 1991
This paper examines the effect of using unidimensional item response theory (IRT) item parameter estimates of multidimensional items to create weakly parallel test forms using target information curves. To date, all computer-based algorithms that have been devised to create parallel test forms assume that the items are unidimensional. This paper…
Descriptors: Algorithms, Equations (Mathematics), Estimation (Mathematics), Item Response Theory
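Building a form against a target information curve, as the unidimensional approaches Ackerman examines assume, can be sketched with 2PL item information and a greedy selection heuristic. The item parameters and the heuristic below are illustrative, not any operational assembly algorithm.

```python
# Sketch: greedily assemble a test form whose information curve
# approximates a target curve, using 2PL item information.
import math

def item_information(theta, a, b):
    """2PL item information: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble_form(pool, target, thetas, n_items):
    """Pick n_items minimising squared distance to the target curve."""
    chosen, info = [], [0.0] * len(thetas)
    for _ in range(n_items):
        def gap(item):
            return sum((t - (i + item_information(th, *item))) ** 2
                       for t, i, th in zip(target, info, thetas))
        best = min((it for it in pool if it not in chosen), key=gap)
        chosen.append(best)
        info = [i + item_information(th, *best) for i, th in zip(info, thetas)]
    return chosen, info

pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.0), (0.6, -0.5)]  # (a, b)
thetas = [-1.0, 0.0, 1.0]
target = [0.6, 0.8, 0.6]          # desired information at each theta point
chosen, info = assemble_form(pool, target, thetas, n_items=3)
print(len(chosen), [round(i, 2) for i in info])
```

Ackerman's point is that when the items are actually multidimensional, matching unidimensional information curves like this need not yield genuinely parallel forms.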

Sowder, Larry – Journal for Research in Mathematics Education, 1975
Two studies investigated the effects of verbalization of discovered generalizations on the retention of those principles. (SD)
Descriptors: Algorithms, Discovery Learning, Generalization, Higher Education