Showing 1 to 15 of 17 results
Peer reviewed
Robertson, S. E. – Journal of Documentation, 1977
The principle that, for optimal retrieval, documents should be ranked in order of the probability of relevance is discussed. (Author/KP)
Descriptors: Information Retrieval, Probability, Relevance (Information Retrieval)
Peer reviewed
Robertson, S. E. – Journal of Documentation, 1974
Poses questions concerning methods of using specificity information in a weighting scheme. (JB)
Descriptors: Documentation, Information Retrieval, Mathematical Models, Relevance (Information Retrieval)
Peer reviewed
Robertson, S. E.; Belkin, N. J. – Journal of Documentation, 1978
Explores the possibility of combining the two principles of ranking according to degree of relevance, and ranking according to probability of relevance, for information retrieval systems, but concludes that, while neither is adequate alone, no single, all-embracing ranking principle can be constructed to replace the two. (Author/VT)
Descriptors: Information Retrieval, Information Systems, Relevance (Information Retrieval), Search Strategies
Peer reviewed
Robertson, S. E.; Sparck Jones, K. – Journal of the American Society for Information Science, 1976
Examines statistical techniques for exploiting relevance information to weight search terms. These techniques are presented as a natural extension of weighting methods using information about the distribution of index terms in documents in general. (Author)
Descriptors: Indexing, Information Retrieval, Probability, Relevance (Information Retrieval)
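The relevance weighting scheme introduced in this article is usually given with half-point smoothing (the "RSJ weight"). A minimal sketch, with variable names following the usual convention rather than the abstract itself (N collection size, n documents containing the term, R known relevant documents, r relevant documents containing the term):

```python
import math

def rsj_weight(N, n, R, r):
    """Robertson/Sparck Jones relevance weight with 0.5-point smoothing.

    N: total documents in the collection
    n: documents containing the term
    R: known relevant documents
    r: relevant documents containing the term
    """
    return math.log(((r + 0.5) / (R - r + 0.5)) /
                    ((n - r + 0.5) / (N - n - R + r + 0.5)))

# A term distributed identically in relevant and non-relevant
# documents gets a weight near zero; a term concentrated in the
# relevant set gets a large positive weight.
```

With no relevance information (R = r = 0) the expression reduces to an idf-like quantity based only on the term's collection distribution, which is the "natural extension" of distribution-based weighting the abstract mentions.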
Peer reviewed
Robertson, S. E.; Teather, D. – Journal of Documentation, 1974
A model is proposed to explain the retrieval characteristics of an information retrieval system. (Author)
Descriptors: Bayesian Statistics, Information Retrieval, Information Systems, Mathematical Models
Peer reviewed
Robertson, S. E.; Harding, P. – Journal of Documentation, 1984
Presents an adaptation of a probabilistic theoretical model previously used in relevance feedback for use in automatic indexing of documents (in the sense of imitating human indexers). Methods for model application are proposed, independence assumptions used in the model are interpreted, and the possibility of a dependence model is discussed.…
Descriptors: Automatic Indexing, Classification, Information Retrieval, Mathematical Models
Peer reviewed
Sparck Jones, K.; Walker, S.; Robertson, S. E. – Information Processing & Management, 2000
This two-part article combines a comprehensive account of a probabilistic model of retrieval with new systematic experiments on TREC (Text Retrieval Conferences) Program material. Part 1 covers the foundations and the model development for document collection and relevance data, along with the test apparatus. Data and results tables for both parts…
Descriptors: Data Analysis, Data Collection, Information Management, Information Retrieval
Peer reviewed
Sparck Jones, K.; Walker, S.; Robertson, S. E. – Information Processing & Management, 2000
A comprehensive account of a probabilistic model of retrieval with new systematic experiments on TREC (Text Retrieval Conferences) Program material. Part 2 covers the further development of the model, with testing, and briefly considers other environment conditions and tasks, model training, concluding with comparisons with other approaches and an…
Descriptors: Comparative Analysis, Data Analysis, Data Collection, Information Management
Peer reviewed
Robertson, S. E. – Journal of Documentation, 1986
A Bayesian argument is used to suggest modifications to the Robertson and Jones relevance weighting formula to accommodate the addition to the query of terms taken from the relevant documents identified during the search. (Author)
Descriptors: Bayesian Statistics, Information Retrieval, Mathematical Formulas, Online Systems
Peer reviewed
Robertson, S. E. – Journal of Documentation, 1990
Discusses term weighting formulae and their use for selecting new terms to enhance a search statement and for weighting the terms for retrieval purposes once selected. The Swets model of information retrieval system performance is described, an approach to term selection is presented, and future research is suggested. (five references) (LRW)
Descriptors: Information Retrieval, Mathematical Formulas, Models, Relevance (Information Retrieval)
Peer reviewed
Robertson, S. E.; Beaulieu, M. – Journal of Documentation, 1997
Reviews the development of research and evaluation in information retrieval. Highlights include the Cranfield projects concerning recall and precision, the Medlars experiment that focused on relevance, TREC (Text Retrieval Conference), evaluation versus understanding, interface versus functionality, document length, and terms and user interaction.…
Descriptors: Computer Interfaces, Conferences, Evaluation Methods, Information Retrieval
Peer reviewed
Huang, Xiangji; Robertson, S. E. – Journal of Documentation, 1997
Discusses the use of text retrieval methods based on probabilistic models with Chinese language material, which are modeled on the Okapi information retrieval system. Topics include system architecture, test collections, weighting functions, and algorithms. (Author/LRW)
Descriptors: Algorithms, Chinese, Computer System Design, Information Retrieval
Peer reviewed
Robertson, S. E.; Hancock-Beaulieu, M. M. – Information Processing & Management, 1992
The increasing complexity of evaluating information retrieval (IR) systems is discussed in terms of contributing factors and research methods. Examples are provided from experiments with weighted searching on a front-end system, information-seeking behavior and online catalogs, and the OKAPI experimental retrieval system. (21 references) (LAE)
Descriptors: Cognitive Processes, Databases, Evaluation Methods, Experiments
Peer reviewed
Robertson, S. E.; Walker, S.; Beaulieu, M. – Information Processing & Management, 2000
Describes use of the Okapi system, an experimental text retrieval system at City University, London, on TREC (Text Retrieval Conference) test collections to investigate probabilistic models and weighting functions; automatic ad hoc experiments; and routing and filtering for query optimization. Discusses interactive system evaluation. (LRW)
Descriptors: Conferences, Evaluation Methods, Foreign Countries, Higher Education
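The weighting functions investigated in this line of Okapi/TREC work include the widely used BM25 function. A minimal sketch of one common formulation, with conventional (not source-specified) parameter values k1 = 1.2 and b = 0.75:

```python
import math

def bm25_term_score(tf, df, N, dl, avgdl, k1=1.2, b=0.75):
    """BM25 contribution of one query term to one document's score.

    tf: term frequency in the document
    df: number of documents containing the term
    N: total documents in the collection
    dl, avgdl: document length and average document length
    k1, b: saturation and length-normalisation parameters
    """
    # Smoothed idf; can go negative for terms occurring in more
    # than half the collection.
    idf = math.log((N - df + 0.5) / (df + 0.5))
    # Length-normalised term-frequency saturation.
    norm = tf + k1 * (1.0 - b + b * dl / avgdl)
    return idf * tf * (k1 + 1.0) / norm
```

A document's score is the sum of this quantity over the query terms; the k1 term makes repeated occurrences of a term contribute with diminishing returns, and b controls how strongly long documents are penalised.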
Peer reviewed
Robertson, S. E. – Information Processing & Management, 1990
Discusses the problem of determining an adequate sample size for an information retrieval experiment comparing two systems on separate samples of requests. The application of statistical methods to information retrieval experiments is discussed, the Mann-Whitney U Test is used for determining minimum sample sizes, and variables and distributions…
Descriptors: Comparative Analysis, Information Retrieval, Measurement Techniques, Predictor Variables
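The Mann-Whitney U statistic used here for sample-size calculations can be computed directly from the two samples of per-request scores. A minimal pure-Python sketch (variable names are illustrative, not from the article):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U: number of pairs (x, y) with x > y, ties counted 0.5.

    xs, ys: per-request effectiveness scores from the two systems
    being compared on separate samples of requests.
    """
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Under the null hypothesis of no difference between the systems,
# U is centred at len(xs) * len(ys) / 2; a large deviation from
# that value suggests one system consistently outscores the other.
```

Sample-size planning then asks how many requests per system are needed before a deviation of a given size becomes statistically detectable.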