Rajagopal, Prabha; Ravana, Sri Devi – Information Research: An International Electronic Journal, 2017
Introduction: The use of averaged topic-level scores can result in the loss of valuable data and can cause misinterpretation of the effectiveness of system performance. This study aims to use the scores of each document to evaluate document retrieval systems in a pairwise system evaluation. Method: The chosen evaluation metrics are document-level…
Descriptors: Information Retrieval, Documentation, Scores, Information Systems
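The pairwise, document-level comparison the abstract describes can be sketched as follows; the per-document scores and the function name are illustrative, not taken from the paper:

```python
# Sketch of a pairwise, document-level comparison of two retrieval systems,
# in the spirit of the study's critique of averaged topic-level scores.

def pairwise_wins(scores_a, scores_b):
    """Count documents where system A outscores B, B outscores A, or they tie."""
    wins_a = sum(1 for a, b in zip(scores_a, scores_b) if a > b)
    wins_b = sum(1 for a, b in zip(scores_a, scores_b) if a < b)
    ties = len(scores_a) - wins_a - wins_b
    return wins_a, wins_b, ties

# Hypothetical document-level scores for ten documents retrieved by both systems:
sys_a = [3, 2, 0, 1, 2, 0, 3, 1, 0, 2]
sys_b = [2, 2, 1, 0, 2, 1, 3, 0, 0, 1]
print(pairwise_wins(sys_a, sys_b))  # (4, 2, 4)
```

Comparing win/loss counts per document, rather than a single averaged score per topic, is what avoids the information loss the authors criticize.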

Keen, Michael – Journal of Documentation, 1997
Examines general issues in conducting information retrieval research. Topics include the Okapi information retrieval system and its probabilistic model; the Cranfield projects, concerning recall and precision; the SMART project with its vector-space model; evaluation methodology, including laboratory evaluation of interactive systems; and…
Descriptors: Comparative Analysis, Evaluation Methods, Information Retrieval, Laboratories
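The recall and precision measures central to the Cranfield tradition reduce to simple set arithmetic; the document IDs below are illustrative:

```python
# Cranfield-style effectiveness measures: recall = relevant retrieved / all
# relevant; precision = relevant retrieved / all retrieved.

def recall_precision(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical run: 10 docs retrieved, 4 of the collection's 8 relevant found.
r, p = recall_precision(range(10), [0, 2, 4, 6, 11, 12, 13, 14])
print(r, p)  # 0.5 0.4
```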

Bollmann, Peter; And Others – Information Processing and Management, 1992
Discusses the PRECALL, PRR (probability of relevance given retrieval), and EP (expected precision) approaches for dealing with the problem of weak ordering in information retrieval systems. Findings of two experiments comparing evaluation results obtained by PRR and EP are reported. Several mathematical formulas and proofs are included. (20…
Descriptors: Comparative Analysis, Evaluation Methods, Information Retrieval, Mathematical Formulas

Lopez-Pujalte, Cristina; Guerrero Bote, Vicente P.; Moya Anegon, Felix de – Information Processing & Management, 2002
Discussion of information retrieval, query optimization techniques, and relevance feedback focuses on genetic algorithms, which are derived from artificial intelligence techniques. Describes an evaluation of different genetic algorithms using a residual collection method and compares results with the Ide dec-hi method (Salton and Buckley, 1990…
Descriptors: Algorithms, Artificial Intelligence, Comparative Analysis, Evaluation Methods
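The Ide dec-hi baseline the genetic-algorithm runs are compared against has a compact form: add all judged-relevant document vectors to the query and subtract the highest-ranked nonrelevant one. A minimal sketch, with illustrative term-weight vectors:

```python
# Ide dec-hi relevance-feedback update (after Salton & Buckley, 1990):
# Q' = Q + sum(relevant doc vectors) - top-ranked nonrelevant doc vector.

def ide_dec_hi(query, relevant_docs, top_nonrelevant):
    new_q = list(query)
    for doc in relevant_docs:
        new_q = [q + d for q, d in zip(new_q, doc)]
    return [q - d for q, d in zip(new_q, top_nonrelevant)]

q = [1.0, 0.0, 0.5]
rel = [[0.25, 0.5, 0.0], [0.25, 0.25, 0.25]]   # judged relevant
nonrel = [0.5, 0.0, 0.25]                      # highest-ranked nonrelevant
print(ide_dec_hi(q, rel, nonrel))  # [1.0, 0.75, 0.5]
```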

Shaw, W. M., Jr.; And Others – Information Processing & Management, 1997
Describes a study that computed the low performance standards for queries in 17 test collections. Predicted by the hypergeometric distribution, the standards represent the highest level of retrieval effectiveness attributable to chance. Operational levels of performance for vector-space and other retrieval models were compared to the standards.…
Descriptors: Comparative Analysis, Evaluation Methods, Information Retrieval, Measurement Techniques
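The chance baseline the abstract refers to can be sketched from the hypergeometric distribution: if a collection holds N documents of which R are relevant, the number of relevant documents among n randomly retrieved is hypergeometric. The collection sizes below are illustrative:

```python
from math import comb

def hypergeom_pmf(k, N, R, n):
    """P(exactly k relevant among n randomly retrieved from N docs, R relevant)."""
    return comb(R, k) * comb(N - R, n - k) / comb(N, n)

def expected_precision_by_chance(N, R, n):
    """Expected precision of a random n-document retrieval; equals R/N."""
    return sum(k * hypergeom_pmf(k, N, R, n) for k in range(min(R, n) + 1)) / n

print(expected_precision_by_chance(1400, 30, 10))  # ~ 30/1400 ~ 0.0214
```

Operational systems are then judged against this floor: performance not clearly above it is attributable to chance.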

Wong, S. K. M.; And Others – Journal of the American Society for Information Science, 1991
Discussion of user queries in information retrieval highlights the experimental evaluation of an adaptive linear model that constructs improved query vectors from user preference judgments on a sample set of documents. The performance of this method is compared with that of standard relevance feedback techniques. (28 references) (LRW)
Descriptors: Algorithms, Comparative Analysis, Evaluation Methods, Information Retrieval
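An adaptive linear model trained from preference judgments can be sketched with a simple error-correcting update: when the user prefers one document over another but the current query vector ranks them the other way, move the query toward the preferred document. Vectors, pass count, and function names here are illustrative, not Wong et al.'s exact procedure:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_query(query, preferences, passes=10):
    """preferences: list of (preferred_doc, other_doc) vector pairs."""
    q = list(query)
    for _ in range(passes):
        for d1, d2 in preferences:
            if dot(q, d1) <= dot(q, d2):  # ranked wrongly: correct the query
                q = [qi + a - b for qi, a, b in zip(q, d1, d2)]
    return q

prefs = [([1.0, 0.0], [0.0, 1.0])]   # user prefers the first document
q = train_query([0.0, 0.0], prefs)
print(dot(q, [1.0, 0.0]) > dot(q, [0.0, 1.0]))  # True
```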

Frei, H. P.; Schauble, P. – Information Processing and Management, 1991
Describes a new effectiveness measure as an alternative to the traditional evaluation measures of recall and precision in information retrieval systems. The statistical approach--which compares two retrieval algorithms--is explained, the information needs of the user are considered, and an experiment with a test collection of abstracts is…
Descriptors: Algorithms, Comparative Analysis, Evaluation Methods, Information Retrieval

Su, Louise T. – Information Processing & Management, 1998
Discusses value of search results as a whole, a measure which asks for a user's rating of the usefulness of a set of search results based on a Likert scale, and suggests that this measure provides a simple way for system comparison and eliminates problems of information-retrieval evaluation with multiple measures. (Author/LRW)
Descriptors: Comparative Analysis, Evaluation Methods, Evaluation Problems, Information Retrieval

Eastman, Caroline M. – Information Processing and Management, 1988
Summarizes the similarities between classification systems for catalog selection and information retrieval systems, and discusses system characteristics that allow the use of measures such as recall and precision in system evaluation. (27 references) (Author/CLB)
Descriptors: Classification, Comparative Analysis, Evaluation Criteria, Evaluation Methods

Borlund, Pia; Ingwersen, Peter – Journal of Documentation, 1997
Describes the development of a method for the evaluation and comparison of interactive information retrieval systems, which are based on the concept of a simulated work-task situation and the involvement of real end users. Highlights include real and simulated information needs; relevance assessments; and the dynamic nature of information needs.…
Descriptors: Comparative Analysis, Computer Simulation, Evaluation Methods, Information Needs

Keen, E. Michael – Information Processing and Management, 1992
Various methods for calculating and presenting results of information retrieval (IR) evaluation research are discussed, with illustrations from recent laboratory results. Highlights include measures for calculating recall and precision, user needs for high or low level of recall, and comparisons of Boolean and ranked output retrieval. (22…
Descriptors: Comparative Analysis, Evaluation Criteria, Evaluation Methods, Experiments

Kantor, Paul; Kim, Myung-Ho; Ibraev, Ulukbek; Atasoy, Koray – Proceedings of the ASIS Annual Meeting, 1999
In assessing information retrieval systems, it is important to compare the number of retrieved relevant items to the total number of relevant items. A variant of the statistical "capture-recapture" method can be used in large collections to estimate total number of relevant documents, providing the model supporting such an analysis can…
Descriptors: Comparative Analysis, Data Analysis, Evaluation Methods, Information Retrieval
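The capture-recapture idea behind this estimate has a classic closed form (the Lincoln-Petersen estimator): treat two independent retrieval runs as two "captures" and scale by their overlap. The counts below are illustrative:

```python
# Lincoln-Petersen capture-recapture estimate of total relevant documents:
# total ~ n1 * n2 / overlap, where overlap = relevant docs found by both runs.

def lincoln_petersen(n1, n2, overlap):
    if overlap == 0:
        raise ValueError("no overlap: estimate undefined")
    return n1 * n2 / overlap

# Run A finds 40 relevant docs, run B finds 30, and 12 are found by both:
print(lincoln_petersen(40, 30, 12))  # 100.0 estimated relevant documents
```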

Marcus, Richard S. – Proceedings of the ASIS Annual Meeting, 1991
Discusses general issues of computer and human understanding; contrasts three paradigms of information retrieval methodology, including statistical, deep semantic or natural language, and smart Boolean; describes CONIT, a knowledge-based intermediary retrieval assistance system; and examines system evaluation procedures, including a…
Descriptors: Comparative Analysis, Evaluation Methods, Expert Systems, Information Retrieval

Ro, Jung Soon – Journal of the American Society for Information Science, 1988
A comparison of the effectiveness of information retrieval based on full-text documents with retrieval based on paragraphs, abstracts, or controlled vocabularies was accomplished using a subset of journal articles with nine search questions. It was found that full-text retrieval achieved significantly higher recall and lower precision than did the…
Descriptors: Automatic Indexing, Comparative Analysis, Databases, Evaluation Methods

An Evaluation of Interactive Boolean and Natural Language Searching with an Online Medical Textbook.
Hersh, William R.; Hickam, David H. – Journal of the American Society for Information Science, 1995
Describes a study conducted at Oregon Health Sciences University that compared the use of three retrieval systems by medical students: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall, precision, or user preferences. (Author/LRW)
Descriptors: Comparative Analysis, Evaluation Methods, Higher Education, Information Retrieval