From the README:

Rels is a program that determines the relevance of text documents to a set of keywords expressed in boolean infix notation. The relevance is determined by comparing the phonetic representation of the keywords with the phonetic representation of every word in a document. (Phonetic searching has some degree of tolerance for misspelled words.) The names of the relevant files are printed to the standard output, in order of relevance.

For example, the command:

    rels "(directory & listing)" /usr/share/man/cat1

(i.e., find the relevance of all files in the catman directory that contain both of the words "directory" and "listing") will list 21 files, out of the 782 catman files (totaling 6.8 MB), of which "ls.1" is the fifth most relevant. That is, to find the command that lists directories on a Unix system, the "literature search" was cut, on average, from 359 files to 5, a reduction of approximately 98%. The command took 55 seconds to execute on a System V, release 4.2 machine (a 20 MHz 386 with an 18 ms ESDI drive), which is a considerable saving in relation to browsing through the files in the directory, since ls.1 is the 359th file in the directory. Although this example is elementary, a similar saving can be demonstrated when searching for documents in email repositories and text archives.

Additional applications include information robots (i.e., "mailbots" or "infobots"), where the disposition (i.e., delivery, filing, or viewing) of text documents can be determined dynamically, based on the relevance of each document to a set of criteria framed in boolean infix notation. In other words, the program can be used to order, or rank, text documents based on a "context," specified in a general mathematical language similar to that used in calculators.

The words in the query are case insensitive; either upper or lower case can be used. Associativity of the operators is left to right, and the precedence of the operators is identical to C:

    precedence    operator
    ----------------------
    high          !  (not)
    middle        &  (and)
    lowest        |  (or)

The operator symbols can be escaped with the "\\" character to include the symbol in a search pattern. The "escape space" character sequence represents one or more space characters in search patterns; each instance will match one or more consecutive whitespace characters (as defined by isspace(3) in ctype.h and/or locale.h), which allows phrases to be searched for. The "many to one" whitespace character translation occurs in both the keyword arguments and the text document(s). Multiple consecutive instances of the "escape space" sequence should not be used in keyword search phrases, and single instances are appropriate only when necessary to specify a consecutive sequence of keywords; the logical and operator is the preferred searching construct when searching for documents that contain set(s) of keywords.
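The following is a minimal C sketch of that many-to-one whitespace translation; the function name is hypothetical, and the program's actual implementation is the one in translit.c:

    #include <ctype.h>

    /* Collapse every run of whitespace, as defined by isspace(3), to a
       single space character, in place -- the many-to-one translation
       applied to both the keyword arguments and the document text.
       (Illustrative sketch only; the function name is hypothetical.) */
    static void collapse_whitespace(char *s)
    {
        char *src = s, *dst = s;

        while (*src != '\0') {
            if (isspace((unsigned char)*src)) {
                while (isspace((unsigned char)*src))
                    src++;
                *dst++ = ' ';
            } else {
                *dst++ = *src++;
            }
        }
        *dst = '\0';
    }

With both the search pattern and the text normalized this way, a single escaped space in a phrase matches any run of whitespace in a document.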
Note that the logical or operator (|) is useful in conjunction with a thesaurus. For example, the thesaurus entry for the word "complexity" is:

    Complexity. -- N. complexity; complexness &c. adj.; complexus;
    complication, implication; intricacy, intrication; perplexity;
    network, labyrinth; wilderness, jungle; involution, raveling,
    entanglement; coil &c. (convolution) 248; sleave, tangled skein,
    knot, Gordian knot, wheels within wheels; kink, knarl; webwork.
    Adj. knarled. complex, complexed; intricate, complicated,
    perplexed, involved, raveled, entangled, knotted, tangled,
    inextricable; irreducible.

implying that a reasonable context for a search for things that are complex would be:

    rels '(complex | complic | implicat | intric | perplex | labyrinth | involut | convolut | involv | tangl | inextric | irreduc)' ...

which would probably return too many document names. The number of documents can be reduced with the logical and (&) and not (!) operators, in an iterative fashion, to reject documents of little interest.

Document format issues:

Hyphenation issues are addressed by deleting hyphens, and any following sequence of whitespace characters (as defined by isspace(3)), in both the keyword arguments and the text document(s). Backspace character issues are addressed by overwriting the character before the backspace with the character after the backspace, which leaves the character of the last instance of consecutive backspace/character combinations. This is specifically for catman pages, which utilize underscore/backspace/character combinations for underlining, in addition to backspace/character combinations for bold (overstrike) representation. Note that for this process to be successful, a single underscore (used for underlining) must precede a single character in the sequence.

Phonetic translation:

This program is a derivative work based on the rel(1) program, available from sunsite.unc.edu in /pub/Linux/utils/text/rel-1.3.tar.gz. The sources were modified to include a soundex search algorithm. The soundex algorithm is a mechanical phonetic translation system for the English language; it converts English words into a corresponding phonetic code for the word. The algorithm is as follows, for each character in a word:

    1) if the character is the first character of the word, do nothing;
       otherwise:

    2) replace consecutive sequences of the labials (i.e., the
       characters B, F, P, V) with the character '1'

    3) replace consecutive sequences of the gutturals and sibilants
       (i.e., the characters C, G, J, K, Q, S, X, Z) with the
       character '2'

    4) replace consecutive sequences of the dentals (i.e., the
       characters D, T) with the character '3'

    5) replace consecutive sequences of the long liquids (i.e., the
       character L) with the character '4'

    6) replace consecutive sequences of the nasals (i.e., the
       characters M, N) with the character '5'

    7) replace consecutive sequences of the short liquids (i.e., the
       character R) with the character '6'

    8) omit all other characters (i.e., the characters A, E, H, I, O,
       U, W, Y)

    9) if the soundex translation of the word is longer than 4
       characters, truncate it to 4 characters.

For example, the soundex translation of the word "conover" is C516.
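For concreteness, the enumeration above can be rendered as a C sketch like the following; the function names are illustrative assumptions, not the program's source:

    #include <ctype.h>

    /* Steps 2) through 8): map a letter to its soundex digit, or to 0
       for the omitted characters A, E, H, I, O, U, W, Y. */
    static char soundex_digit(char c)
    {
        switch (toupper((unsigned char)c)) {
        case 'B': case 'F': case 'P': case 'V':
            return '1';                  /* labials */
        case 'C': case 'G': case 'J': case 'K':
        case 'Q': case 'S': case 'X': case 'Z':
            return '2';                  /* gutturals and sibilants */
        case 'D': case 'T':
            return '3';                  /* dentals */
        case 'L':
            return '4';                  /* long liquids */
        case 'M': case 'N':
            return '5';                  /* nasals */
        case 'R':
            return '6';                  /* short liquids */
        default:
            return 0;                    /* omitted characters */
        }
    }

    /* Translate word into its soundex code, truncated to 4 characters
       per step 9); out must hold at least 5 bytes.  For example,
       soundex("conover", out) leaves "C516" in out. */
    static void soundex(const char *word, char *out)
    {
        int n = 0;
        char prev;

        if (word[0] == '\0') {
            out[0] = '\0';
            return;
        }
        out[n++] = (char)toupper((unsigned char)word[0]); /* step 1) */
        prev = soundex_digit(word[0]);
        for (word++; *word != '\0' && n < 4; word++) {
            char d = soundex_digit(*word);
            if (d != 0 && d != prev)     /* a consecutive run maps once */
                out[n++] = d;
            prev = d;
        }
        out[n] = '\0';
    }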
Unfortunately, there are two related issues in using the soundex algorithm as a search mechanism: interior keyword search is impossible, and there is no practical strategy for handling hyphenation. As a heuristic, simply eliminating step 1), above, permits interior keyword searches, and permits hyphenation to be handled by concatenating the characters on each side of a '-' character, at the expense of erroneous matches. In practice, the expense is small (depending on the point of view), particularly if the requirement in step 9), above, is also removed, permitting soundex translations of keywords with more syllables. Note that this heuristic returns soundex-translated words that consist only of numbers. Since numerical data can be a valid search criterion, the ambiguity can be avoided by using letters from the alphabet in place of the numbers, making the algorithm as follows:

    1) replace consecutive sequences of the labials (i.e., the
       characters B, F, P, V) with the character 'B'

    2) replace consecutive sequences of the gutturals and sibilants
       (i.e., the characters C, G, J, K, Q, S, X, Z) with the
       character 'G'

    3) replace consecutive sequences of the dentals (i.e., the
       characters D, T) with the character 'D'

    4) replace consecutive sequences of the long liquids (i.e., the
       character L) with the character 'L'

    5) replace consecutive sequences of the nasals (i.e., the
       characters M, N) with the character 'N'

    6) replace consecutive sequences of the short liquids (i.e., the
       character R) with the character 'S'

    7) omit all other characters (i.e., the characters A, E, H, I, O,
       U, W, Y)

which turns out to be implementable as a direct, many-to-one, and onto simple character mapping. It is also a very fast phonetic search methodology; there is no speed penalty.
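A sketch of that character mapping in C, under the same run-collapsing rule as above and with no length truncation (again, the names are illustrative assumptions, not the program's source):

    #include <ctype.h>

    /* The direct, many-to-one, and onto character mapping: a letter
       maps to its modified-soundex letter, or to 0 if it is omitted.
       (Illustrative sketch only.) */
    static char modified_soundex_char(char c)
    {
        switch (toupper((unsigned char)c)) {
        case 'B': case 'F': case 'P': case 'V':
            return 'B';                  /* labials */
        case 'C': case 'G': case 'J': case 'K':
        case 'Q': case 'S': case 'X': case 'Z':
            return 'G';                  /* gutturals and sibilants */
        case 'D': case 'T':
            return 'D';                  /* dentals */
        case 'L':
            return 'L';                  /* long liquids */
        case 'M': case 'N':
            return 'N';                  /* nasals */
        case 'R':
            return 'S';                  /* short liquids */
        default:
            return 0;                    /* A, E, H, I, O, U, W, Y */
        }
    }

    /* Translate word; out must hold at least strlen(word) + 1 bytes.
       There is no first-character exception and no truncation, so
       interior keyword searches remain possible. */
    static void modified_soundex(const char *word, char *out)
    {
        char prev = 0;

        for (; *word != '\0'; word++) {
            char m = modified_soundex_char(*word);
            if (m != 0 && m != prev)     /* a consecutive run maps once */
                *out++ = m;
            prev = m;
        }
        *out = '\0';
    }

Under this sketch, for example, the word "complexity" translates to "GNBLGD".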
Comparing the two methodologies (standard soundex vs. modified soundex) on a text version of the Webster's dictionary (mine has 234,932 words), as to the number of different words recognized, with both unlimited soundex word length and a word length of 4:

                        standard soundex        modified soundex
                      length = 4  unlimited   length = 4  unlimited
    words recognized     4,335      61,408        932       31,983

Although the modified soundex with unlimited length is inferior to the standard soundex with unlimited word length in its capability to recognize differences between words, it is superior to the standard soundex with a word length of 4, which is the way the algorithm is usually used. It would seem that the modified soundex algorithm is a reasonable (depending on the point of view) compromise for implementing a phonetic search algorithm.

There are additional issues with the soundex algorithm for phonetic keyword searches:

    1) it only works for the English language

    2) a syntax error will be returned for keywords made up of ONLY the
       characters A, E, I, H, O, U, W, and Y (there is nothing to
       search for; these characters are ignored by the soundex
       algorithm)

    3) extreme care must be exercised when using the algorithm to
       reject documents with the logical not operator (!), since it
       will reject more documents than probably expected

meaning that the algorithm should be considered an adjunct to, rather than a replacement for, a strict keyword search. Tests on large email archives and on the HTML pages from WWW servers (each about 15 MB) indicate that, in practice, the algorithm returns not quite twice as many keyword matches as a strict keyword search. (The output of this program was compared to the output of the rel(1) program.)

General description of the program:

This program is an experiment to evaluate the use of infix boolean operations as a heuristic to determine the relevance of text files in electronic literature searches. The operators supported are "&" for logical "and," "|" for logical "or," and "!" for logical "not." Parentheses are used as grouping operators, and "partial key" searches are fully supported (meaning that the words can be abbreviated). For example, the command:

    rels "(((these & those) | (them & us)) ! we)" file1 file2 ...

would print, from the list of filenames file1, file2, ..., the names of the files that contain either the words "these" and "those", or the words "them" and "us", but do not contain the word "we". The file names are printed in order of relevance, where relevance is determined by the number of occurrences of the words "these", "those", "them", and "us" in each file. The general concept is to "narrow down" the number of files to be browsed when doing electronic literature searches for specific words and phrases in a group of files, using a command similar to:

    more `rels "(((these & those) | (them & us)) ! we)" file1 file2`

Although regular expressions were supported in the prototype versions of the program, the capability was removed from the release versions for reasons of syntactical formality. For example, the command:

    rels "((john & conover) & (joh.*over))" files

contains a logical contradiction, since the first group specifies all files that contain "john" anyplace and "conover" anyplace in the files, while the second group specifies all files that contain "john" followed by "conover". If the last group of operators takes precedence, the first is redundant. Additionally, it is not clear whether wild card expressions should span multiple records in a literature search (which the first group of operators in this example does), or exactly what a wild card expression that spans multiple records means, i.e., how many records are to be spanned, without writing a string of EOLs in the infix expression. Since the two groups of operators in this example are operationally very close (at least for practical purposes), it was decided that support for regular expressions should be abandoned, and such operations left to the grep(1) suite.

Applicability:

The applicability of rels varies with the complexity of the search, the size of the database, the speed of the host environment, etc.; however, as some general guidelines:

    1) For text files with a total size of less than 5 MB, rels and
       standard egrep(1) queries of the text files will probably prove
       adequate.

    2) For text files with a total size of 5 MB to 50 MB, qt seems
       adequate for most queries. The significant issue is that,
       although the retrieval execution times are probably adequate
       with qt, the database write times are not impressive. Qt is
       listed in "Related information retrieval software:," below.

    3) For text files with a total size larger than 50 MB, or where
       concurrency is an issue, it would be appropriate to consider
       one of the other alternatives listed in "Related information
       retrieval software:," below.

Extensibility:

The source was written with extensibility in mind. To alter character transliterations, see uppercase.c for details. For enhancements to phrase searching and hyphenation suggestions, see translit.c.

It is possible to "weight" the relevance determination of documents that are composed in one of the standardized general markup languages, like TeX/LaTeX or SGML. The "weight" of the relevance of search matches would depend on where the words are found in the structure of the document. For example, if the search was for "numerical" and "methods," \chapter{Numerical Methods} would be weighted "stronger" than if the words were found in \section{Numerical Methods}, which in turn would be weighted "stronger" than if the words were found in a paragraph. This would permit the relevance of a document to be determined by how the author structured the document. See eval.c for suggestions.
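One hypothetical shape such a weighting could take, where the weights, the function name, and the line-oriented scan are all assumptions rather than anything in the sources:

    #include <string.h>

    /* Assign an assumed weight to a keyword match according to where
       it occurs in a LaTeX document: a hit inside \chapter{...} counts
       more than one inside \section{...}, which counts more than a hit
       in ordinary body text.  (Hypothetical sketch; see eval.c.) */
    static int match_weight(const char *line)
    {
        if (strstr(line, "\\chapter{") != NULL)
            return 4;    /* assumed weight for chapter headings */
        if (strstr(line, "\\section{") != NULL)
            return 2;    /* assumed weight for section headings */
        return 1;        /* body text */
    }

A document's relevance count would then accumulate match_weight() per hit, instead of 1, so that matches in \chapter{Numerical Methods} dominate matches in running text.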
The list of identifiers in the search argument can be printed to stdout, possibly preceded by a '+' character and separated by '|' characters, to make an egrep(1)-compatible search argument. This could, conceivably, be used as the search argument in a browser, so that something like:

    browse `rels arg directory`

would automatically search the directory for arg, load the files into the browser, and skip to the first instance of an identifier, with one button scanning to the next instance, and so on. See postfix.c for suggestions.

The source architecture is highly modularized to facilitate adapting the program to different environments and applications. For example, a "mailbot" can be constructed by eliminating searchpath.c and constructing a list of postfix stacks, perhaps with an email address element added to each stack, in such a manner that the program could be used to scan incoming mail and, if the mail was relevant to any postfix criteria, forward it to the recipient.

The program is capable of running as a wide area, distributed, full text information retrieval system. A possible scenario would be to distribute a large database among many systems that are internetworked together, presumably via the Unix inet facility, with each system running a copy of the program. Queries would be submitted to the systems, and the systems would return individual records containing the count of matches to the query and the name of the file containing the matches, perhaps with the machine name, in such a manner that the records could be sorted on the "count field." A network-wide "browser" could then be used to view the documents, or a script could use the "r suite" to transfer the documents to the local machine. Obviously, the queries would run in parallel on the machines in the network; concurrency would not be an issue. See the function main() for suggestions.

References:

1) "Information Retrieval: Data Structures & Algorithms," William B. Frakes and Ricardo Baeza-Yates, Editors, Prentice Hall, Englewood Cliffs, New Jersey 07632, 1992, ISBN 0-13-463837-9. The sources for many of the algorithms presented in 1) are available by ftp: ftp.vt.edu:/pub/reuse/ircode.tar.Z.

2) "Text Information Retrieval Systems," Charles T. Meadow, Academic Press, Inc., San Diego, 1992, ISBN 0-12-487410-X.

3) "Full Text Databases," Carol Tenopir and Jung Soon Ro, Greenwood Press, New York, 1990, ISBN 0-313-26303-5.

4) "Text and Context: Document Processing and Storage," Susan Jones, Springer-Verlag, New York, 1991, ISBN 0-387-19604-8.

5) ftp think.com:/wais/wais-corporate-paper.text

6) ftp cs.toronto.edu:/pub/lq-text.README.1.10

Related information retrieval software:

1) Wais, available by ftp: think.com:/wais/wais-8-b5.1.tar.Z

2) Lq-text, available by ftp: cs.toronto.edu:/pub/lq-text1.10.tar.Z

3) Qt, available by ftp: ftp.uu.net:/usenet/comp.sources/unix/volume27

john@johncon.com (John Conover)
Campbell, California, USA
February, 1998