Showing 1 to 15 of 41 results
Peer reviewed
Harman, Donna – Information Processing and Management, 1992
This introduction to a special issue briefly describes six articles about evaluating information retrieval (IR) systems and methods, and considers some of the major problems encountered in evaluating interactive information retrieval. (two references) (LAE)
Descriptors: Evaluation Methods, Experiments, Information Retrieval, Online Searching
Smith, Nick L. – 1978
The Educational Resources Information Center (ERIC) system was used to identify 1,734 documents on program evaluation methodology. Attempts to further characterize these documents by subject were only partially successful because of insufficient indexing depth and the design constraints of the computer system. Attempts to characterize…
Descriptors: Bibliographies, Databases, Evaluation, Evaluation Methods
Peer reviewed
Persin, Michael; And Others – Journal of the American Society for Information Science, 1996
Discusses ranking techniques for document retrieval and proposes an evaluation, or filtering, technique that reduces costs and memory usage by early recognition of which documents are likely to be highly ranked. Highlights include index design, ranked query evaluation, reducing the number of accumulators, inverted file structures, and experimental…
Descriptors: Algorithms, Computer Storage Devices, Costs, Evaluation Methods
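The early-termination idea in this entry lends itself to a brief illustration. The sketch below is a loose reading, not Persin et al.'s exact algorithm: the toy inverted index, the weights, and the accumulator cap are all invented for the example.

```python
# Toy inverted index: term -> postings of (doc_id, weight), sorted by
# descending weight so high-impact postings are seen first.
# (Invented data; the paper works with real inverted file structures.)
INDEX = {
    "retrieval": [("d1", 0.9), ("d3", 0.7), ("d2", 0.2)],
    "evaluation": [("d3", 0.8), ("d1", 0.5), ("d4", 0.1)],
}

MAX_ACCUMULATORS = 3  # assumed cap on partial-score accumulators

def ranked_query(terms, k=2):
    """Score documents term-at-a-time, but refuse to create new
    accumulators once the cap is hit: documents first seen after that
    point are judged unlikely to rank highly and are dropped early."""
    accumulators = {}
    for term in terms:
        for doc_id, weight in INDEX.get(term, []):
            if doc_id in accumulators:
                accumulators[doc_id] += weight  # update an existing score
            elif len(accumulators) < MAX_ACCUMULATORS:
                accumulators[doc_id] = weight   # admit a new candidate
            # else: skip -- this document can no longer enter the ranking
    return sorted(accumulators.items(), key=lambda x: -x[1])[:k]

print(ranked_query(["retrieval", "evaluation"]))
# d3 outranks d1; d4 was never admitted, saving an accumulator
```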
Peer reviewed
Borlund, Pia; Ingwersen, Peter – Journal of Documentation, 1997
Describes the development of a method for evaluating and comparing interactive information retrieval systems, based on the concept of a simulated work-task situation and the involvement of real end users. Highlights include real and simulated information needs; relevance assessments; and the dynamic nature of information needs.…
Descriptors: Comparative Analysis, Computer Simulation, Evaluation Methods, Information Needs
Peer reviewed
Salton, Gerald – Information Processing and Management, 1992
Reviews the current state of information retrieval (IR) evaluation, with criticisms directed at the available test collections and at the research and evaluation methodologies used, including precision and recall rates for online searches and laboratory tests that do not include real users. Automatic text retrieval systems are also discussed. (32…
Descriptors: Databases, Evaluation Methods, Indexing, Information Retrieval
Peer reviewed
Dillon, Martin; Desper, James – Journal of Documentation, 1980
Describes a technique for automatic reformulation of Boolean queries which compares favorably with feedback as employed in a SMART system. Using patron relevance judgments, prevalence measures reflecting term distribution in relevant and nonrelevant documents are derived to guide the construction of a Boolean query for a subsequent retrieval.…
Descriptors: Evaluation Methods, Feedback, Information Retrieval, Information Seeking
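A minimal sketch of the reformulation idea follows. The prevalence measure here (difference of term occurrence rates in relevant versus nonrelevant documents) and the inclusion threshold are illustrative assumptions; Dillon and Desper's actual derivation differs in detail.

```python
def prevalence(term, relevant, nonrelevant):
    """Fraction of relevant documents containing the term minus the
    fraction of nonrelevant documents containing it -- a simple
    stand-in for measures derived from patron relevance judgments."""
    rel = sum(term in d for d in relevant) / len(relevant)
    nonrel = sum(term in d for d in nonrelevant) / len(nonrelevant)
    return rel - nonrel

def reformulate(terms, relevant, nonrelevant, threshold=0.3):
    """OR together the terms whose prevalence clears the threshold,
    yielding a Boolean query for the subsequent retrieval run."""
    kept = [t for t in terms if prevalence(t, relevant, nonrelevant) >= threshold]
    return " OR ".join(kept) if kept else None

relevant = [{"indexing", "recall"}, {"recall", "precision"}]
nonrelevant = [{"indexing", "hardware"}]
print(reformulate(["indexing", "recall", "precision"], relevant, nonrelevant))
# "recall OR precision" -- "indexing" is dropped for appearing mostly
# in the nonrelevant set
```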
Hitchingham, Eileen E. – 1975
In an attempt to measure online searching performance, test searches of a single question conducted by three bibliographic searchers are analyzed to determine the effects of search strategy on results. The study reviews retrieved citations for relevance, calculating precision and estimated recall values for each search. Performance analysis reveals…
Descriptors: Databases, Evaluation Methods, Information Retrieval, Literature Reviews
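The two measures in this entry are easy to state directly. In the sketch below, estimating recall against a pooled set of relevant citations found by any of the searchers is one common strategy and an assumption here, not necessarily the one Hitchingham used.

```python
def precision(retrieved, relevant):
    """Share of retrieved citations judged relevant."""
    return len(retrieved & relevant) / len(retrieved)

def estimated_recall(retrieved, relevant, relevant_pool):
    """Share of the known relevant pool this search recovered. The pool
    (e.g. the union of relevant items found by all three searchers)
    stands in for the unknowable full set of relevant citations."""
    return len(retrieved & relevant & relevant_pool) / len(relevant_pool)

search_a = {"c1", "c2", "c3", "c4"}
relevant = {"c1", "c3", "c7"}
pool = {"c1", "c3", "c7"}  # relevant citations found by any searcher

print(precision(search_a, relevant))               # 0.5
print(estimated_recall(search_a, relevant, pool))  # about 0.67
```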
Peer reviewed
Hersh, William R.; Hickam, David H. – Journal of the American Society for Information Science, 1995
Describes a study conducted at Oregon Health Sciences University that compared the use of three retrieval systems by medical students: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall, precision, or user preferences. (Author/LRW)
Descriptors: Comparative Analysis, Evaluation Methods, Higher Education, Information Retrieval
Peer reviewed
Meadow, Charles T. – Canadian Journal of Information and Library Science, 1996
Proposes a method of evaluating information retrieval systems by concentrating on individual tools (commands, their menus or graphic interface equivalents, or a move/stratagem). A user would assess the relative success of a small part of a search, and every tool used in that part would be credited with a contribution to the result. Cumulative…
Descriptors: Computer Interfaces, Computer System Design, Evaluation Criteria, Evaluation Methods
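One way to read the proposal in code: split each search segment's success score among the tools used in it and accumulate per-tool totals. The equal-split rule and the command names below are assumptions for illustration only, not Meadow's stated scheme.

```python
from collections import defaultdict

def credit_tools(segments):
    """Accumulate each tool's contribution across search segments.
    Each segment is (tools_used, success_score in [0, 1]); the score
    is split equally among the tools used in that segment."""
    totals = defaultdict(float)
    for tools, score in segments:
        for tool in tools:
            totals[tool] += score / len(tools)
    return dict(totals)

session = [
    (["SELECT", "EXPAND"], 0.8),  # a successful move
    (["LIMIT"], 0.2),             # a less successful one
]
print(credit_tools(session))
# {'SELECT': 0.4, 'EXPAND': 0.4, 'LIMIT': 0.2}
```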
Peer reviewed
Siddiqui, Moid A. – Online Review, 1991
This review of the literature on full-text databases provides information on search strategy, performance measurement, and the benefits and limitations of full text compared to bibliographic database searching. Various use studies and uses of full-text databases are also listed. (21 references) (LAE)
Descriptors: Access to Information, Evaluation Methods, Full Text Databases, Information Retrieval
Peer reviewed
Robertson, S. E.; Hancock-Beaulieu, M. M. – Information Processing and Management, 1992
The increasing complexity of evaluating information retrieval (IR) systems is discussed in terms of contributing factors and research methods. Examples are provided from experiments with weighted searching on a front-end system, information-seeking behavior and online catalogs, and the OKAPI experimental retrieval system. (21 references) (LAE)
Descriptors: Cognitive Processes, Databases, Evaluation Methods, Experiments
Blood, Richard W. – 1981
Based on an analysis of online search evaluation forms collected from all types of U.S. libraries, and a pilot test of a draft evaluation form in selected federal research libraries, this report presents the work of the American Library Association's (ALA's) Machine-Assisted Reference Section (MARS) Committee on Measurement and Evaluation. The…
Descriptors: Evaluation Criteria, Evaluation Methods, Evaluation Needs, Information Retrieval
Gould, Cheryl – 1998
An adaptation of a live workshop, this guide offers pertinent information and practical techniques for users to become proficient searchers and conscious evaluators of World Wide Web resources. The book does not provide comprehensive coverage of the Internet; rather, it gives the exact amount and level of information needed to perform a successful…
Descriptors: Access to Information, Evaluation Methods, Information Retrieval, Information Seeking
Grisby, Alice B.; Hoffman, Herbert H. – Database, 1981
Examines the effectiveness of online searching for embedded stories, poems, essays, and other literary works in the Magazine Index and Modern Language Association Bibliography databases. Strategies and results of 16 test searches set up to test the access capabilities of the two databases are profiled. (SW)
Descriptors: Databases, Evaluation Methods, Information Retrieval, Literary Genres
Cornell Univ., Ithaca, NY. Dept. of Computer Science. – 1970
Two papers are included as Part Four of this report on Salton's Magical Automatic Retriever of Texts (SMART) project. The first paper, "A Controlled Single Pass Classification Algorithm with Application to Multilevel Clustering" by D. B. Johnson and J. M. Laferente, presents a single-pass clustering method which compares favorably…
Descriptors: Algorithms, Automation, Classification, Cluster Grouping
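The single-pass clustering method named in the first paper can be outlined briefly. Cosine similarity and a fixed admission threshold are standard choices assumed here, not necessarily the controls the paper actually proposes.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def single_pass_cluster(docs, threshold=0.5):
    """Assign each document, in one pass, to the best-matching cluster
    if its centroid similarity clears the threshold; otherwise start a
    new cluster. Centroids are running sums of member vectors."""
    clusters = []  # list of (centroid, member_list) pairs
    for doc in docs:
        best = max(clusters, key=lambda c: cosine(doc, c[0]), default=None)
        if best and cosine(doc, best[0]) >= threshold:
            centroid, members = best
            for t, w in doc.items():
                centroid[t] = centroid.get(t, 0.0) + w
            members.append(doc)
        else:
            clusters.append((dict(doc), [doc]))
    return clusters

docs = [{"ir": 1.0, "eval": 0.5}, {"ir": 0.9, "eval": 0.6}, {"poetry": 1.0}]
print(len(single_pass_cluster(docs)))  # 2 clusters: the two IR docs merge
```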