Showing 2,251 to 2,265 of 9,031 results
Peer reviewed
Greisdorf, Howard – Information Processing & Management, 2003
Examines end-user judgment and evaluation behavior during information retrieval (IR) system interactions and extends previous research surrounding relevance as a key construct for representing the value end-users ascribe to items retrieved from IR systems. The self-reporting worksheet is appended. (Author/AEF)
Descriptors: Evaluation Criteria, Evaluation Methods, Information Retrieval, Information Seeking
Peer reviewed
Bordogna, G.; And Others – Information Processing and Management, 1991
Presents an analytical approach to the interpretation of weighted Boolean queries. By distinguishing query term weights from query weights, a query becomes a means of describing classes of ideal documents and expressing relativity criteria among these descriptions. A formalization of query term weights is given in a fuzzy set theoretical context.…
Descriptors: Evaluation Criteria, Indexing, Information Retrieval, Information Science
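The weighted-query idea in the Bordogna abstract can be illustrated with a small fuzzy-set sketch. This uses a generic importance-weighting scheme (Dubois-Prade style), not the paper's exact formalization; the document grades and term weights below are invented toy values.

```python
def weighted_and(doc_grades, weighted_terms):
    """Importance-weighted fuzzy AND: a term of importance w contributes
    max(grade, 1 - w), so low-importance terms cannot drag the
    conjunction down; the conjunction itself is the min over terms."""
    return min(max(doc_grades.get(t, 0.0), 1.0 - w) for t, w in weighted_terms)

def weighted_or(doc_grades, weighted_terms):
    """Dual disjunction: each term contributes min(grade, w),
    and the disjunction is the max over terms."""
    return max(min(doc_grades.get(t, 0.0), w) for t, w in weighted_terms)

# Toy document: strong on "retrieval", weak on "fuzzy".
grades = {"retrieval": 0.8, "fuzzy": 0.3}
score = weighted_and(grades, [("retrieval", 1.0), ("fuzzy", 0.4)])
# min(max(0.8, 0.0), max(0.3, 0.6)) = 0.6
```

Distinguishing query *term* weights (importance of each term) from query weights (thresholds on the whole description) is what lets such a query describe classes of ideal documents rather than a single Boolean condition.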
Peer reviewed
Metz, Paul; Potter, William G. – Journal of Academic Librarianship, 1989
The first of two articles discusses the advantages of online subject searching, the recall and precision tradeoff, and possible future developments in electronic searching. The second reviews the experiences of academic libraries that offer online searching of bibliographic, full text, and statistical databases in addition to online catalogs. (CLB)
Descriptors: Academic Libraries, Artificial Intelligence, Databases, Higher Education
Peer reviewed
Can, Fazli – Information Processing and Management, 1994
Discussion of relevancy in information retrieval systems focuses on an analysis of the efficiency of various cluster-based retrieval (CBR) strategies. A method for combining CBR and inverted index search is proposed that is cost effective in terms of time efficiency; and results of experiments are reported. (Contains 32 references.) (LRW)
Descriptors: Algorithms, Cluster Grouping, Comparative Analysis, Cost Effectiveness
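The combination Can describes, cluster-based retrieval backed by an inverted index, can be sketched as follows. This is a minimal illustration of the general idea, not the paper's actual algorithm; the clustering, the overlap-based centroid ranking, and the toy scoring are all assumptions for the example.

```python
from collections import defaultdict

def build_index(docs):
    """Inverted index: term -> set of document ids."""
    index = defaultdict(set)
    for doc_id, terms in docs.items():
        for t in terms:
            index[t].add(doc_id)
    return index

def cbr_search(query, docs, clusters, top_clusters=1):
    """First rank clusters against the query, then score only documents
    in the best clusters via the inverted index's posting lists."""
    index = build_index(docs)
    def centroid_overlap(members):
        terms = set().union(*(docs[d] for d in members))
        return len(terms & query)
    ranked = sorted(clusters, key=centroid_overlap, reverse=True)
    candidates = set().union(*ranked[:top_clusters])
    scores = defaultdict(int)
    for t in query:
        for d in index[t] & candidates:
            scores[d] += 1
    return sorted(scores, key=scores.get, reverse=True)

docs = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"x", "y"}}
result = cbr_search({"a", "b"}, docs, clusters=[{1, 2}, {3}])
```

The time saving comes from the second phase: posting lists are intersected only against the candidate set, so documents in unpromising clusters are never scored.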
Peer reviewed
Tonta, Yasar – Public-Access Computer Systems Review, 1992
Discusses the concept of search failure in document retrieval systems and three effectiveness measures: precision, recall, and "fallout." Four research methods--retrieval effectiveness measures, user satisfaction measures, transaction log analysis, and critical incident technique--are examined, and findings of major studies using each of the…
Descriptors: Evaluation Methods, Information Retrieval, Information Systems, Online Searching
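The three measures named in the Tonta abstract have standard definitions and can be computed directly from a single search outcome; the set values below are illustrative toy data.

```python
def effectiveness(retrieved, relevant, collection_size):
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant items that were retrieved.
    Fallout: fraction of non-relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    nonrelevant = collection_size - len(relevant)
    fallout = (len(retrieved) - hits) / nonrelevant if nonrelevant else 0.0
    return precision, recall, fallout

p, r, f = effectiveness(retrieved={1, 2, 3, 4}, relevant={2, 4, 6},
                        collection_size=10)
# precision = 2/4, recall = 2/3, fallout = 2/7
```

A search can fail on any one measure independently: retrieving everything drives recall to 1 while ruining precision and fallout, which is why studies of search failure report all three.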
Peer reviewed
Brooks, Terrence A. – Journal of the American Society for Information Science, 1995
Describes two experiments that investigated the influence of textual factors on the perception of bibliographic records. Topics include index descriptors; term overlap; extent of semantic distance; direction of semantic distance; generic trees of descriptors; relevance perception; and implications for the design of information retrieval…
Descriptors: Bibliographic Records, Computer System Design, Hypothesis Testing, Information Retrieval
Peer reviewed
Spink, Amanda – Information Processing & Management, 1995
This study uses the human approach to examine the sources and effectiveness of search terms selected during 40 mediated interactive database searches and focuses on determining the retrieval effectiveness of search terms identified by users and intermediaries from retrieved items during term relevance feedback. (Author/JKP)
Descriptors: Computer Interfaces, Computer System Design, Information Retrieval, Online Searching
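Term relevance feedback, the mechanism the Spink study examines from the human side, is classically automated with a Rocchio-style update: the query vector moves toward terms from items judged relevant. The sketch below shows that standard formula, not anything specific to the study; the weights alpha, beta, gamma and the toy vectors are assumptions.

```python
def rocchio(query, relevant_docs, nonrelevant_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Standard Rocchio feedback: reweight every term seen in the query
    or the judged documents toward relevant and away from non-relevant."""
    terms = (set(query)
             | {t for d in relevant_docs for t in d}
             | {t for d in nonrelevant_docs for t in d})
    def mean(docs, t):
        return sum(d.get(t, 0.0) for d in docs) / len(docs) if docs else 0.0
    return {t: alpha * query.get(t, 0.0)
               + beta * mean(relevant_docs, t)
               - gamma * mean(nonrelevant_docs, t)
            for t in terms}

updated = rocchio({"ir": 1.0}, [{"ir": 1.0, "feedback": 1.0}], [])
# "feedback" enters the query with weight beta = 0.75
```

The study's question is effectively which source of feedback terms (user, intermediary, or retrieved items) yields the terms worth this kind of reweighting.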
Peer reviewed
Hersh, William R.; Hickam, David H. – Journal of the American Society for Information Science, 1995
Describes a study conducted at Oregon Health Sciences University that compared the use of three retrieval systems by medical students: a Boolean system, a word-based natural language system, and a concept-based natural language system. Results showed no statistically significant differences in recall, precision, or user preferences. (Author/LRW)
Descriptors: Comparative Analysis, Evaluation Methods, Higher Education, Information Retrieval
Peer reviewed
Trahan, Eric – Library Quarterly, 1993
Describes and discusses meta-analysis and criticisms of the methodology. Reports on a pilot study which tested the feasibility of meta-analytic methods in library science research using the literature on paper- or computer-based information retrieval. (28 references) (EA)
Descriptors: Bibliographies, Effect Size, Information Retrieval, Library Research
Nickerson, Gord – Computers in Libraries, 1992
Outlines the features and drawbacks of Wide Area Information Servers (WAIS). Praises the ease of use for end users and the variety of documents and other information available, but cautions users about searching problems resulting from the use of relevance ranking schemes. Calls for greater librarian involvement. (EA)
Descriptors: Bibliographic Databases, Computer System Design, Full Text Databases, Information Networks
Warner, Julian – Proceedings of the ASIS Annual Meeting, 1992
Discusses an experiment with the "Thesaurus of ERIC Descriptors" which aimed to adapt established criteria for the evaluation of information retrieval systems to the assessment of searcher performance in online bibliographic searching. The concept of relevance is discussed, and the Cranfield paradigm for the evaluation of information…
Descriptors: Bibliographic Databases, Evaluation Criteria, Evaluation Methods, Information Retrieval
Peer reviewed
Wilbur, W. John – Journal of the American Society for Information Science, 1993
Presents a method of modeling the relevance relationship in information retrieval to answer the question of the theoretical limits of certain statistical methods. Hypergeometric probability distribution is used to construct an abstract model of a database of MEDLINE records, and results of tests of vector retrieval methods are reported. (28…
Descriptors: Automatic Indexing, Bayesian Statistics, Bibliographic Databases, Expert Systems
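The distribution underlying the Wilbur abstract is the ordinary hypergeometric law; the sketch below gives just that probability mass function, not the paper's full abstract model of MEDLINE records. The example parameters are invented.

```python
from math import comb

def hypergeom_pmf(k, K, n, N):
    """P(X = k) when drawing n items without replacement from a
    population of N items of which K are 'successes' (e.g., the chance
    that a document sharing n terms with the database has exactly k of
    the K query-relevant terms)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

p = hypergeom_pmf(1, 2, 2, 4)   # C(2,1)*C(2,1)/C(4,2) = 4/6
```

Because the distribution depends only on counts, it gives a theoretical ceiling for term-matching (vector) retrieval: no statistical method can beat what the overlap counts make possible.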
Peer reviewed
Meadow, Charles T. – Canadian Journal of Information and Library Science, 1996
Proposes a method of evaluating information retrieval systems by concentrating on individual tools (commands, their menus or graphic interface equivalents, or a move/stratagem). A user would assess the relative success of a small part of a search, and every tool used in that part would be credited with a contribution to the result. Cumulative…
Descriptors: Computer Interfaces, Computer System Design, Evaluation Criteria, Evaluation Methods
Marino, Nancy Robinson – Library Talk, 1998
Discusses the use of webliographies, collections of Internet sites on a particular subject, to help students find relevant and useful sources on the World Wide Web. Highlights include the validity of information sources; developing criteria for evaluating information; how to set up a webliography Web page; and HTML commands. (LRW)
Descriptors: Computer Uses in Education, Elementary Education, Evaluation Criteria, Information Retrieval
Peer reviewed
Losee, Robert M. – Journal of the American Society for Information Science, 1997
Suggests a method that allows searchers to analytically compare the Boolean and probabilistic information retrieval approaches. Sample performance figures are provided for queries using the Boolean strategy, and for probabilistic systems. The variation of performance across sublanguages and queries is examined, as well as the performance of models…
Descriptors: Comparative Analysis, Data Analysis, Databases, Information Retrieval
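The contrast Losee analyzes can be made concrete in miniature: a Boolean query partitions documents into match/no-match, while a binary-independence probabilistic ranker orders every document by a sum of per-term weights. This sketch shows only that structural difference, not the paper's analytic comparison method; the term weights are assumed precomputed.

```python
def boolean_and(doc_terms, query_terms):
    """Boolean AND: a document either satisfies the query or it does not;
    there is no ranking among matches."""
    return query_terms <= doc_terms

def bim_score(doc_terms, term_weights):
    """Binary-independence-style score: sum the (precomputed) log-odds
    weight of each query term the document contains, yielding a ranking
    over all documents rather than a yes/no partition."""
    return sum(w for t, w in term_weights.items() if t in doc_terms)
```

An analytic comparison then amounts to asking, for given term statistics, how often the Boolean partition agrees with the top of the probabilistic ranking.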