+ class="PARAMETER">query. For example:
+
+SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'c'::tsquery);
+ ts_rewrite
+------------
+ 'b' & 'c'
+
+
+
+
+
+
+
+
+
+ ts_rewrite(ARRAY[query tsquery>, target tsquery>, substitute tsquery>]) returns tsquery>
+
+
+
+
+ Aggregate form. XXX if we choose not to remove this, it needs to
+ be documented better. Note it is not listed in
+ textsearch-functions-table at the moment.
+
+CREATE TABLE aliases (t tsquery PRIMARY KEY, s tsquery);
+INSERT INTO aliases VALUES('a', 'c');
+
+SELECT ts_rewrite(ARRAY['a & b'::tsquery, t,s]) FROM aliases;
+ ts_rewrite
+------------
+ 'b' & 'c'
+
+
+
+
+
+
+
+
+
+ ts_rewrite (query> tsquery>, select> text>) returns tsquery>
+
+
+
+
+ This form of ts_rewrite> accepts a starting
+ query> and a SQL select> command, which
+ is given as a text string. The select> must yield two
+ columns of tsquery> type. For each row of the
+ select> result, occurrences of the first column value
+ (the target) are replaced by the second column value (the substitute)
+ within the current query> value. For example:
+
+CREATE TABLE aliases (t tsquery PRIMARY KEY, s tsquery);
+INSERT INTO aliases VALUES('a', 'c');
+
+SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases');
+ ts_rewrite
+------------
+ 'b' & 'c'
+
+
+
+ Note that when multiple rewrite rules are applied in this way,
+ the order of application can be important; so in practice you will
+ want the source query to ORDER BY> some ordering key.
+
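+ As an illustrative sketch (reusing the aliases table from the
+ previous example, and assuming its primary key t is a suitable
+ ordering column), the rules can be applied in a deterministic order:
+
+SELECT ts_rewrite('a & b'::tsquery,
+                  'SELECT t, s FROM aliases ORDER BY t');
+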
+
+
+
+
+
+ Let's consider a real-life astronomical example. We'll expand the query
+ supernovae using table-driven rewriting rules:
+
+CREATE TABLE aliases (t tsquery primary key, s tsquery);
+INSERT INTO aliases VALUES(to_tsquery('supernovae'), to_tsquery('supernovae|sn'));
+
+SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT * FROM aliases');
+ ts_rewrite
+---------------------------------
+ 'crab' & ( 'supernova' | 'sn' )
+
+
+ We can change the rewriting rules just by updating the table:
+
+UPDATE aliases SET s = to_tsquery('supernovae|sn & !nebulae') WHERE t = to_tsquery('supernovae');
+
+SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT * FROM aliases');
+ ts_rewrite
+---------------------------------------------
+ 'crab' & ( 'supernova' | 'sn' & !'nebula' )
+
+
+
+ Rewriting can be slow when there are many rewriting rules, since it
+ checks every rule for a possible hit. To filter out obvious non-candidate
+ rules we can use the containment operators for the tsquery
+ type. In the example below, we select only those rules which might match
+ the original query:
+
+SELECT ts_rewrite('a & b'::tsquery,
+ 'SELECT t,s FROM aliases WHERE ''a & b''::tsquery @> t');
+ ts_rewrite
+------------
+ 'b' & 'c'
+
+
+
+
+
+
+
+
+
Triggers for Automatic Updates
+
+
+ for updating a derived tsvector column
+
+
+ When using a separate column to store the tsvector> representation
+ of your documents, it is necessary to create a trigger to update the
+ tsvector> column when the document content columns change.
+ Two built-in trigger functions are available for this, or you can write
+ your own.
+
+
+
+ tsvector_update_trigger(tsvector_column_name, config_name, text_column_name , ... )
+ tsvector_update_trigger_column(tsvector_column_name, config_column_name, text_column_name , ... )
+
+
+ These trigger functions automatically compute a tsvector>
+ column from one or more textual columns, under the control of
+ parameters specified in the CREATE TRIGGER> command.
+ An example of their use is:
+
+CREATE TABLE messages (
+ title text,
+ body text,
+ tsv tsvector
+);
+
+CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
+ON messages FOR EACH ROW EXECUTE PROCEDURE
+tsvector_update_trigger(tsv, 'pg_catalog.english', title, body);
+
+INSERT INTO messages VALUES('title here', 'the body text is here');
+
+SELECT * FROM messages;
+ title | body | tsv
+------------+-----------------------+----------------------------
+ title here | the body text is here | 'bodi':4 'text':5 'titl':1
+
+SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title & body');
+ title | body
+------------+-----------------------
+ title here | the body text is here
+
+
+ Having created this trigger, any change in title> or
+ body> will automatically be reflected into
+ tsv>, without the application having to worry about it.
+
+
+ The first trigger argument must be the name of the tsvector>
+ column to be updated. The second argument specifies the text search
+ configuration to be used to perform the conversion. For
+ tsvector_update_trigger>, the configuration name is simply
+ given as the second trigger argument. It must be schema-qualified as
+ shown above, so that the trigger behavior will not change with changes
+ in search_path>. For
+ tsvector_update_trigger_column>, the second trigger argument
+ is the name of another table column, which must be of type
+ regconfig>. This allows a per-row selection of configuration
+ to be made. The remaining argument(s) are the names of textual columns
+ (of type text>, varchar>, or char>). These
+ will be included in the document in the order given. NULL values will
+ be skipped (but the other columns will still be indexed).
+
+
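+ As a sketch of the per-row configuration variant (the table and its
+ regconfig column below are hypothetical, not part of the example above):
+
+CREATE TABLE messages_multi (
+    title  text,
+    body   text,
+    config regconfig,  -- per-row text search configuration
+    tsv    tsvector
+);
+
+CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
+ON messages_multi FOR EACH ROW EXECUTE PROCEDURE
+tsvector_update_trigger_column(tsv, config, title, body);
+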
+ A limitation of the built-in triggers is that they treat all the
+ input columns alike. To process columns differently — for
+ example, to weight title differently from body — it is necessary
+ to write a custom trigger. Here is an example using
+
PL/pgSQL as the trigger language:
+
+CREATE FUNCTION messages_trigger() RETURNS trigger AS $$
+begin
+ new.tsv :=
+ setweight(to_tsvector('pg_catalog.english', coalesce(new.title,'')), 'A') ||
+ setweight(to_tsvector('pg_catalog.english', coalesce(new.body,'')), 'D');
+ return new;
+end
+$$ LANGUAGE plpgsql;
+
+CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
+ON messages FOR EACH ROW EXECUTE PROCEDURE messages_trigger();
+
+
+
+ Keep in mind that it is important to specify the configuration name
+ explicitly when creating tsvector> values inside triggers,
+ so that the column's contents will not be affected by changes to
+ default_text_search_config>. Failure to do this is likely to
+ lead to problems such as search results changing after a dump and reload.
+
+
+
+
+
+
Gathering Document Statistics
+
+
+
+
+ The function ts_stat> is useful for checking your
+ configuration and for finding stop-word candidates.
+
+
+
+ ts_stat(sqlquery text>, weights text>, OUT word text>, OUT ndoc integer>, OUT nentry integer>) returns setof record>
+
+
+ sqlquery is a text value containing a SQL
+ query which must return a single tsvector column.
+ ts_stat> executes the query and returns statistics about
+ each distinct lexeme (word) contained in the tsvector
+ data. The columns returned are
+
+
+
+ word> text> — the value of a lexeme
+
+
+
+ ndoc> integer> — number of documents
+ (tsvector>s) the word occurred in
+
+
+
+ nentry> integer> — total number of
+ occurrences of the word
+
+
+
+
+ If weights is supplied, only occurrences
+ having one of those weights are counted.
+
+
+ For example, to find the ten most frequent words in a document collection:
+
+SELECT * FROM ts_stat('SELECT vector FROM apod')
+ORDER BY nentry DESC, ndoc DESC, word
+LIMIT 10;
+
+
+ The same, but counting only word occurrences with weight A>
+ or B>:
+
+SELECT * FROM ts_stat('SELECT vector FROM apod', 'ab')
+ORDER BY nentry DESC, ndoc DESC, word
+LIMIT 10;
+
+
+
+
+
+
+
+
+
Parsers
+
+ Text search parsers are responsible for splitting raw document text
+ into tokens> and identifying each token's type, where
+ the set of possible types is defined by the parser itself.
+ Note that a parser does not modify the text at all — it simply
+ identifies plausible word boundaries. Because of this limited scope,
+ there is less need for application-specific custom parsers than there is
+ for custom dictionaries. At present
PostgreSQL
+ provides just one built-in parser, which has been found to be useful for a
+ wide range of applications.
+
+
+ The built-in parser is named pg_catalog.default>.
+ It recognizes 23 token types:
+
+
+
+
Default Parser's Token Types
+
+
+ |
+ Alias
+ Description
+ Example
+
+
+
+ |
+ lword
+ Latin word (only ASCII letters)
+ foo
+
+ |
+ nlword
+ Non-latin word (only non-ASCII letters)
+
+
+ |
+ word
+ Word (other cases)
+ beta1
+
+ |
+ lhword
+ Latin hyphenated word
+ foo-bar
+
+ |
+ nlhword
+ Non-latin hyphenated word
+
+
+ |
+ hword
+ Hyphenated word
+ foo-beta1
+
+ |
+ lpart_hword
+ Latin part of hyphenated word
+ foo or bar in the context
+ foo-bar>
+
+ |
+ nlpart_hword
+ Non-latin part of hyphenated word
+
+
+ |
+ part_hword
+ Part of hyphenated word
+ beta1 in the context
+ foo-beta1>
+
+ |
+ email
+ Email address
+
+ |
+ protocol
+ Protocol head
+ http://
+
+ |
+ url
+ URL
+ foo.com/stuff/index.html
+
+ |
+ host
+ Host
+ foo.com
+
+ |
+ uri
+ URI
+ /stuff/index.html, in the context of a URL
+
+ |
+ file
+ File or path name
+ /usr/local/foo.txt, if not within a URL
+
+ |
+ sfloat
+ Scientific notation
+ -1.234e56
+
+ |
+ float
+ Decimal notation
+ -1.234
+
+ |
+ int
+ Signed integer
+ -1234
+
+ |
+ uint
+ Unsigned integer
+ 1234
+
+ |
+ version
+ Version number
+ 8.3.0
+
+ |
+ tag
+ HTML Tag
+ <A HREF="dictionaries.html">
+
+ |
+ entity
+ HTML Entity
+ &
+
+ |
+ blank
+ Space symbols
+ (any whitespace or punctuation not otherwise recognized)
+
+
+
+
+
+ It is possible for the parser to produce overlapping tokens from the same
+ piece of text. As an example, a hyphenated word will be reported both
+ as the entire word and as each component:
+
+SELECT "Alias", "Description", "Token" FROM ts_debug('foo-bar-beta1');
+ Alias | Description | Token
+-------------+-------------------------------+---------------
+ hword | Hyphenated word | foo-bar-beta1
+ lpart_hword | Latin part of hyphenated word | foo
+ blank | Space symbols | -
+ lpart_hword | Latin part of hyphenated word | bar
+ blank | Space symbols | -
+ part_hword | Part of hyphenated word | beta1
+
+
+ This behavior is desirable since it allows searches to work for both
+ the whole compound word and for components. Here is another
+ instructive example:
+
+SELECT "Alias", "Description", "Token" FROM ts_debug('http://foo.com/stuff/index.html');
+ Alias | Description | Token
+----------+---------------+--------------------------
+ protocol | Protocol head | http://
+ url | URL | foo.com/stuff/index.html
+ host | Host | foo.com
+ uri | URI | /stuff/index.html
+
+
+
+
+
+
+
Dictionaries
+
+ Dictionaries are used to eliminate words that should not be considered in a
+ search (stop words>), and to normalize> words so
+ that different derived forms of the same word will match. A successfully
+ normalized word is called a lexeme>. Aside from
+ improving search quality, normalization and removal of stop words reduce the
+ size of the tsvector representation of a document, thereby
+ improving performance. Normalization does not always have linguistic meaning
+ and usually depends on application semantics.
+
+
+ Some examples of normalization:
+
+
+
+
+ Linguistic - ispell dictionaries try to reduce input words to a
+ normalized form; stemmer dictionaries remove word endings
+
+
+
+
URL locations can be canonicalized to make
+ equivalent URLs match:
+
+
+
+ http://www.pgsql.ru/db/mw/index.html
+
+
+
+ http://www.pgsql.ru/db/mw/
+
+
+
+ http://www.pgsql.ru/db/../db/mw/index.html
-
+
+
+ A dictionary is a program that accepts a token as
+ input and returns:
+
+
+ an array of lexemes if the input token is known to the dictionary
+ (notice that one token can produce more than one lexeme)
+
+
+
+ an empty array if the dictionary knows the token, but it is a stop word
+
+
+
+ NULL if the dictionary does not recognize the input token
+
+
+
+
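+ As a quick illustration of two of these outcomes (a hedged sketch using
+ the built-in english stemmer; a Snowball stemmer accepts any word, so the
+ NULL case only arises with dictionaries that have a limited vocabulary,
+ such as Ispell dictionaries):
+
+SELECT ts_lexize('english_stem', 'stars');   -- known word, returns {star}
+SELECT ts_lexize('english_stem', 'a');       -- stop word, returns {}
+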
+
+
PostgreSQL provides predefined dictionaries for
+ many languages. There are also several predefined templates that can be
+ used to create new dictionaries with custom parameters. Each predefined
+ dictionary template is described below. If no existing
+ template is suitable, it is possible to create new ones; see the
+
contrib/> area of the PostgreSQL> distribution
+ for examples.
+
+
+ A text search configuration binds a parser together with a set of
+ dictionaries to process the parser's output tokens. For each token
+ type that the parser can return, a separate list of dictionaries is
+ specified by the configuration. When a token of that type is found
+ by the parser, each dictionary in the list is consulted in turn,
+ until some dictionary recognizes it as a known word. If it is identified
+ as a stop word, or if no dictionary recognizes the token, it will be
+ discarded and not indexed or searched for.
+ The general rule for configuring a list of dictionaries
+ is to place first the most narrow, most specific dictionary, then the more
+ general dictionaries, finishing with a very general dictionary, like
+ a
Snowball> stemmer or simple>, which
+ recognizes everything. For example, for an astronomy-specific search
+ (astro_en configuration) one could bind token type
+ lword (Latin word) to a synonym dictionary of astronomical
+ terms, a general English dictionary and a
Snowball> English
+ stemmer:
+
+ALTER TEXT SEARCH CONFIGURATION astro_en
+ ADD MAPPING FOR lword WITH astrosyn, english_ispell, english_stem;
+
+
+
+
+
Stop Words
+
+ Stop words are words that are very common, appear in almost every
+ document, and have no discrimination value. Therefore, they can be ignored
+ in the context of full text searching. For example, every English text
+ contains words like a and the>, so it is
+ useless to store them in an index. However, stop words do affect the
+ positions in tsvector, which in turn affect ranking:
+
+SELECT to_tsvector('english','in the list of stop words');
+ to_tsvector
+----------------------------
+ 'list':3 'stop':5 'word':6
+
+
+ The missing positions 1, 2, and 4 are due to stop words. Ranks
+ calculated for documents with and without stop words are quite different:
+
+SELECT ts_rank_cd (to_tsvector('english','in the list of stop words'), to_tsquery('list & stop'));
+ ts_rank_cd
+------------
+ 0.05
+
+SELECT ts_rank_cd (to_tsvector('english','list stop words'), to_tsquery('list & stop'));
+ ts_rank_cd
+------------
+ 0.1
+
- A dictionary is a program that accepts a token as
- input and returns:
-
-
- an array of lexemes if the input token is known to the dictionary
- (notice that one token can produce more than one lexeme)
-
-
-
- an empty array if the dictionary knows the token, but it is a stop word
-
-
-
- NULL if the dictionary does not recognize the input token
-
-
-
-
+
-
PostgreSQL provides predefined dictionaries for
- many languages. There are also several predefined templates that can be
- used to create new dictionaries with custom parameters. If no existing
- dictionary template is suitable, it is possible to create new ones; see the
-
contrib/> area of the PostgreSQL> distribution
- for examples.
-
+ It is up to the specific dictionary how it treats stop words. For example,
+ ispell dictionaries first normalize words and then
+ look at the list of stop words, while Snowball stemmers
+ first check the list of stop words. The reason for the different
+ behavior is an attempt to decrease noise.
+
+
+
+
+
+
Simple Dictionary
+
+ The simple> dictionary template operates by converting the
+ input token to lower case and checking it against a file of stop words.
+ If it is found in the file then NULL> is returned, causing
+ the token to be discarded. If not, the lower-cased form of the word
+ is returned as the normalized lexeme.
+
+
+ Here is an example of a dictionary definition using the simple>
+ template:
+
+CREATE TEXT SEARCH DICTIONARY public.simple_dict (
+ TEMPLATE = pg_catalog.simple,
+ STOPWORDS = english
+);
+
+
+ Here, english is the base name of a file of stop words.
+ The file's full name will be
+ $SHAREDIR/tsearch_data/english.stop>,
+ where $SHAREDIR> means the
+
PostgreSQL installation's shared-data directory,
+ often /usr/local/share/postgresql> (use pg_config
+ --sharedir> to determine it if you're not sure).
+ The file format is simply a list
+ of words, one per line. Blank lines and trailing spaces are ignored,
+ and upper case is folded to lower case, but no other processing is done
+ on the file contents.
+
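+ For illustration, the first few lines of such a stop word file might
+ look like this (contents are only an example):
+
+a
+an
+the
+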
+
+ Now we can test our dictionary:
+
+SELECT ts_lexize('public.simple_dict','YeS');
+ ts_lexize
+-----------
+ {yes}
+
+SELECT ts_lexize('public.simple_dict','The');
+ ts_lexize
+-----------
+ {}
+
+
+
+
+ Most types of dictionaries rely on configuration files, such as files of
+ stop words. These files must> be stored in UTF-8 encoding.
+ They will be translated to the actual database encoding, if that is
+ different, when they are read into the server.
+
+
+
+
+ Normally, a database session will read a dictionary configuration file
+ only once, when it is first used within the session. If you modify a
+ configuration file and want to force existing sessions to pick up the
+ new contents, issue an ALTER TEXT SEARCH DICTIONARY> command
+ on the dictionary. This can be a dummy> update that doesn't
+ actually change any parameter values.
+
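+ For example, re-issuing an existing parameter value is enough to force
+ a reload (a sketch using the simple_dict dictionary defined above):
+
+ALTER TEXT SEARCH DICTIONARY public.simple_dict ( StopWords = english );
+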
+
+
+
+
+
+
Synonym Dictionary
+
+ This dictionary template is used to create dictionaries that replace a
+ word with a synonym. Phrases are not supported (use the thesaurus
+ template () for that). A synonym
+ dictionary can be used to overcome linguistic problems, for example, to
+ prevent an English stemmer dictionary from reducing the word 'Paris' to
+ 'pari'. It is enough to have a Paris paris line in the
+ synonym dictionary and put it before the english_stem> dictionary:
+
+SELECT * FROM ts_debug('english','Paris');
+ Alias | Description | Token | Dictionaries | Lexized token
+-------+-------------+-------+----------------+----------------------
+ lword | Latin word | Paris | {english_stem} | english_stem: {pari}
+(1 row)
+
+CREATE TEXT SEARCH DICTIONARY synonym (
+ TEMPLATE = synonym,
+ SYNONYMS = my_synonyms
+);
+
+ALTER TEXT SEARCH CONFIGURATION english
+ ALTER MAPPING FOR lword WITH synonym, english_stem;
+
+SELECT * FROM ts_debug('english','Paris');
+ Alias | Description | Token | Dictionaries | Lexized token
+-------+-------------+-------+------------------------+------------------
+ lword | Latin word | Paris | {synonym,english_stem} | synonym: {paris}
+(1 row)
+
+
+
+ The only parameter required by the synonym> template is
+ SYNONYMS>, which is the base name of its configuration file
+ — my_synonyms> in the above example.
+ The file's full name will be
+ $SHAREDIR/tsearch_data/my_synonyms.syn>
+ (where $SHAREDIR> means the
+
PostgreSQL> installation's shared-data directory).
+ The file format is just one line
+ per word to be substituted, with the word followed by its synonym,
+ separated by white space. Blank lines and trailing spaces are ignored,
+ and upper case is folded to lower case.
+
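+ For instance, my_synonyms.syn could contain lines like these
+ (illustrative contents only):
+
+Paris paris
+postgres pg
+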
+
+
+
+
+
Thesaurus Dictionary
+
+ A thesaurus dictionary (sometimes abbreviated as
TZ) is
+ a collection of words that includes information about the relationships
+ of words and phrases, i.e., broader terms (
BT), narrower
+ terms (
NT), preferred terms, non-preferred terms, related
+ terms, etc.
+
+
+ Basically a thesaurus dictionary replaces all non-preferred terms by one
+ preferred term and, optionally, preserves the original terms for indexing
+ as well.
PostgreSQL>'s current implementation of the
+ thesaurus dictionary is an extension of the synonym dictionary with added
+ phrase support. A thesaurus dictionary requires
+ a configuration file of the following format:
+
+# this is a comment
+sample word(s) : indexed word(s)
+more sample word(s) : more indexed word(s)
+...
+
+
+ where the colon (:) symbol acts as a delimiter between
+ a phrase and its replacement.
+
+
+ A thesaurus dictionary uses a subdictionary (which
+ is specified in the dictionary's configuration) to normalize the input
+ text before checking for phrase matches. It is only possible to select one
+ subdictionary. An error is reported if the subdictionary fails to
+ recognize a word. In that case, you should remove the use of the word or
+ teach the subdictionary about it. You can place an asterisk
+ (*) at the beginning of an indexed word to skip applying
+ the subdictionary to it, but all sample words must> be known
+ to the subdictionary.
+
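+ A hedged sketch of the asterisk notation described above (the replacement
+ term sn1987a is hypothetical):
+
+# the * prefix keeps "sn1987a" from being passed through the subdictionary
+supernovae stars : *sn1987a
+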
+
+ The thesaurus dictionary chooses the longest match if there are multiple
+ phrases matching the input, and ties are broken by using the last
+ definition.
+
+
+ Stop words recognized by the subdictionary are replaced by a stop
+ word placeholder to record their position. To illustrate this,
+ consider these phrases:
+
+a one the two : swsw
+the one a two : swsw2
+
+
+ Assuming that a> and the> are stop words according
+ to the subdictionary, these two phrases are identical to the thesaurus:
+ they both look like stopword> one>
+ stopword> two>. Input matching this pattern
+ will be replaced by swsw2>, according to the tie-breaking rule.
+
+
+ Since a thesaurus dictionary has the capability to recognize phrases it
+ must remember its state and interact with the parser. A thesaurus dictionary
+ uses the configuration's token-type assignments to decide whether it should
+ handle the next word or stop accumulating a phrase. The thesaurus dictionary
+ must be configured carefully. For example, if the thesaurus dictionary is
+ assigned to handle only the lword token, then a thesaurus dictionary
+ definition like one 7> will not work since token type
+ uint is not assigned to the thesaurus dictionary.
+
+
+
+ Thesauruses are used during indexing so any change in the thesaurus
+ dictionary's parameters requires reindexing.
+ For most other dictionary types, small changes such as adding or
+ removing stop words do not force reindexing.
+
+
+
+
+
Thesaurus Configuration
+
+ To define a new thesaurus dictionary, use the thesaurus>
+ template. For example:
+
+CREATE TEXT SEARCH DICTIONARY thesaurus_simple (
+ TEMPLATE = thesaurus,
+ DictFile = mythesaurus,
+ Dictionary = pg_catalog.english_stem
+);
+
+
+ Here:
+
+
+ thesaurus_simple is the new dictionary's name
+
+
+
+ mythesaurus is the base name of the thesaurus
+ configuration file.
+ (Its full name will be $SHAREDIR/tsearch_data/mythesaurus.ths>,
+ where $SHAREDIR> means the installation shared-data
+ directory.)
+
+
+
+ pg_catalog.english_stem is the subdictionary (here,
+ a Snowball English stemmer) to use for thesaurus normalization.
+ Notice that the subdictionary will have its own
+ configuration (for example, stop words), which is not shown here.
+
+
+
- A text search configuration binds a parser together with a set of
- dictionaries to process the parser's output tokens. For each token
- type that the parser can return, a separate list of dictionaries is
- specified by the configuration. When a token of that type is found
- by the parser, each dictionary in the list is consulted in turn,
- until some dictionary recognizes it as a known word. If it is identified
- as a stop word, or if no dictionary recognizes the token, it will be
- discarded and not indexed or searched for.
- The general rule for configuring a list of dictionaries
- is to place first the most narrow, most specific dictionary, then the more
- general dictionaries, finishing with a very general dictionary, like
- a
Snowball> stemmer or simple>, which
- recognizes everything. For example, for an astronomy-specific search
- (astro_en configuration) one could bind
- lword (latin word) with a synonym dictionary of astronomical
- terms, a general English dictionary and a
Snowball> English
- stemmer:
+ Now it is possible to bind the thesaurus dictionary thesaurus_simple
+ to the desired token types in a configuration, for example:
-ALTER TEXT SEARCH CONFIGURATION astro_en
- ADD MAPPING FOR lword WITH astrosyn, english_ispell, english_stem;
+ALTER TEXT SEARCH CONFIGURATION russian
+ ADD MAPPING FOR lword, lhword, lpart_hword WITH thesaurus_simple;
-
+
-
-
Stop Words
+
+
+
+
Thesaurus Example
- Stop words are words that are very common, appear in almost every
- document, and have no discrimination value. Therefore, they can be ignored
- in the context of full text searching. For example, every English text
- contains words like a and the>, so it is
- useless to store them in an index. However, stop words do affect the
- positions in tsvector, which in turn affect ranking:
+ Consider a simple astronomical thesaurus thesaurus_astro,
+ which contains some astronomical word combinations:
-SELECT to_tsvector('english','in the list of stop words');
- to_tsvector
-----------------------------
- 'list':3 'stop':5 'word':6
+supernovae stars : sn
+crab nebulae : crab
- The mising positions 1,2,4 are because of stop words. Ranks
- calculated for documents with and without stop words are quite different:
+ Below we create a dictionary and bind some token types to
+ an astronomical thesaurus and english stemmer:
-SELECT ts_rank_cd ('{1,1,1,1}', to_tsvector('english','in the list of stop words'), to_tsquery('list & stop'));
- ts_rank_cd
-------------
- 0.5
+CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
+ TEMPLATE = thesaurus,
+ DictFile = thesaurus_astro,
+ Dictionary = english_stem
+);
-SELECT ts_rank_cd ('{1,1,1,1}', to_tsvector('english','list stop words'), to_tsquery('list & stop'));
- ts_rank_cd
+ALTER TEXT SEARCH CONFIGURATION russian
+ ADD MAPPING FOR lword, lhword, lpart_hword WITH thesaurus_astro, english_stem;
+
+
+ Now we can see how it works.
+ ts_lexize is not very useful for testing a thesaurus,
+ because it treats its input as a single token. Instead we can use
+ plainto_tsquery and to_tsvector
+ which will break their input strings into multiple tokens:
+
+SELECT plainto_tsquery('supernova star');
+ plainto_tsquery
+-----------------
+ 'sn'
+
+SELECT to_tsvector('supernova star');
+ to_tsvector
+-------------
+ 'sn':1
+
+
+ In principle, you can use to_tsquery if you quote
+ the argument:
+
+SELECT to_tsquery('''supernova star''');
+ to_tsquery
------------
- 1
+ 'sn'
+ Notice that supernova star matches supernovae
+ stars in thesaurus_astro because we specified
+ the english_stem stemmer in the thesaurus definition.
+ The stemmer removed the e> and s>.
- It is up to the specific dictionary how it treats stop words. For example,
- ispell dictionaries first normalize words and then
- look at the list of stop words, while Snowball stemmers
- first check the list of stop words. The reason for the different
- behavior is an attempt to decrease noise.
+ To index the original phrase as well as the substitute, just include it
+ in the right-hand part of the definition:
+
+supernovae stars : sn supernovae stars
+
+SELECT plainto_tsquery('supernova star');
+ plainto_tsquery
+-----------------------------
+ 'sn' & 'supernova' & 'star'
+
+
+
- simple-dictionary">
-
Simple Dictionary
+ ispell-dictionary">
- The simple> dictionary template operates by converting the
- input token to lower case and checking it against a file of stop words.
- If it is found in the file then NULL> is returned, causing
- the token to be discarded. If not, the lower-cased form of the word
- is returned as the normalized lexeme.
+ The
Ispell> dictionary template supports
+ morphological dictionaries>, which can normalize many
+ different linguistic forms of a word into the same lexeme. For example,
+ an English
Ispell> dictionary can match all declensions and
+ conjugations of the search term bank, e.g.
+ banking>, banked>, banks>,
+ banks'>, and bank's>.
- Here is an example of a dictionary definition using the simple>
- template:
+ The standard
PostgreSQL distribution does
+ not include any
Ispell> configuration files.
+ Dictionaries for a large number of languages are available from
+ url="http://ficus-www.cs.ucla.edu/geoff/ispell.html">Ispell.
+ Also, some more modern dictionary file formats are supported —
+ url="http://en.wikipedia.org/wiki/MySpell">MySpell (OO < 2.0.1)
+ (OO >= 2.0.2). A large list of dictionaries is available on the
+ url="http://wiki.services.openoffice.org/wiki/Dictionaries">OpenOffice
+ Wiki.
+
+
+ To create an
Ispell> dictionary, use the built-in
+ ispell template and specify several parameters:
+
-CREATE TEXT SEARCH DICTIONARY public.simple_dict (
- TEMPLATE = pg_catalog.simple,
- STOPWORDS = english
+CREATE TEXT SEARCH DICTIONARY english_ispell (
+ TEMPLATE = ispell,
+ DictFile = english,
+ AffFile = english,
+ StopWords = english
);
- Here, english is the base name of a file of stop words.
- The file's full name will be
- $SHAREDIR/tsearch_data/english.stop>,
- where $SHAREDIR> means the
-
PostgreSQL installation's shared-data directory,
- often /usr/local/share/postgresql> (use pg_config
- --sharedir> to determine it if you're not sure).
- The file format is simply a list
- of words, one per line. Blank lines and trailing spaces are ignored,
- and upper case is folded to lower case, but no other processing is done
- on the file contents.
+ Here, DictFile>, AffFile>, and StopWords>
+ specify the base names of the dictionary, affixes, and stop-words files.
+ The stop-words file has the same format explained above for the
+ simple> dictionary type. The format of the other files is
+ not specified here but is available from the above-mentioned web sites.
- Now we can test our dictionary:
-
-SELECT ts_lexize('public.simple_dict','YeS');
- ts_lexize
------------
- {yes}
-
-SELECT ts_lexize('public.simple_dict','The');
- ts_lexize
------------
- {}
-
+ Ispell dictionaries usually recognize a limited set of words, so they
+ should be followed by another broader dictionary; for
+ example, a Snowball dictionary, which recognizes everything.
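+ For example, Latin words could be mapped to an Ispell dictionary followed
+ by a Snowball stemmer (a sketch reusing the english_ispell and
+ english_stem dictionaries shown elsewhere in this chapter):
+
+ALTER TEXT SEARCH CONFIGURATION english
+    ALTER MAPPING FOR lword WITH english_ispell, english_stem;
+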
-
- Most types of dictionaries rely on configuration files, such as files of
- stop words. These files must> be stored in UTF-8 encoding.
- They will be translated to the actual database encoding, if that is
- different, when they are read into the server.
-
-
-
-
- Normally, a database session will read a dictionary configuration file
- only once, when it is first used within the session. If you modify a
- configuration file and want to force existing sessions to pick up the
- new contents, issue an ALTER TEXT SEARCH DICTIONARY> command
- on the dictionary. This can be a dummy> update that doesn't
- actually change any parameter values.
-
-
-
-
-
-
-
Synonym Dictionary
-
- This dictionary template is used to create dictionaries that replace a
- word with a synonym. Phrases are not supported (use the thesaurus
- template () for that). A synonym
- dictionary can be used to overcome linguistic problems, for example, to
- prevent an English stemmer dictionary from reducing the word 'Paris' to
- 'pari'. It is enough to have a Paris paris line in the
- synonym dictionary and put it before the english_stem> dictionary:
+ Ispell dictionaries support splitting compound words.
+ This is a nice feature and
+
PostgreSQL supports it.
+ Notice that the affix file should specify a special flag using the
+ compoundwords controlled statement that marks dictionary
+ words that can participate in compound formation:
-SELECT * FROM ts_debug('english','Paris');
- Alias | Description | Token | Dictionaries | Lexized token
--------+-------------+-------+----------------+----------------------
- lword | Latin word | Paris | {english_stem} | english_stem: {pari}
-(1 row)
-
-CREATE TEXT SEARCH DICTIONARY synonym (
- TEMPLATE = synonym,
- SYNONYMS = my_synonyms
-);
-
-ALTER TEXT SEARCH CONFIGURATION english
- ALTER MAPPING FOR lword WITH synonym, english_stem;
+compoundwords controlled z
+
-SELECT * FROM ts_debug('english','Paris');
- Alias | Description | Token | Dictionaries | Lexized token
--------+-------------+-------+------------------------+------------------
- lword | Latin word | Paris | {synonym,english_stem} | synonym: {paris}
-(1 row)
+ Here are some examples for the Norwegian language:
+
+SELECT ts_lexize('norwegian_ispell', 'overbuljongterningpakkmesterassistent');
+ {over,buljong,terning,pakk,mester,assistent}
+SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk');
+ {sjokoladefabrikk,sjokolade,fabrikk}
- The only parameter required by the synonym> template is
- SYNONYMS>, which is the base name of its configuration file
- — my_synonyms> in the above example.
- The file's full name will be
- $SHAREDIR/tsearch_data/my_synonyms.syn>
- (where $SHAREDIR> means the
-
PostgreSQL> installation's shared-data directory).
- The file format is just one line
- per word to be substituted, with the word followed by its synonym,
- separated by white space. Blank lines and trailing spaces are ignored,
- and upper case is folded to lower case.
-
+
+
MySpell> does not support compound words.
+
Hunspell> has sophisticated support for compound words. At
+ present,
PostgreSQL implements only the basic
+ compound word operations of Hunspell.
+
+
-
-
Thesaurus Dictionary
-
- A thesaurus dictionary (sometimes abbreviated as
TZ) is
- a collection of words that includes information about the relationships
- of words and phrases, i.e., broader terms (
BT), narrower
- terms (
NT), preferred terms, non-preferred terms, related
- terms, etc.
-
+
- Basically a thesaurus dictionary replaces all non-preferred terms by one
- preferred term and, optionally, preserves the original terms for indexing
- as well.
PostgreSQL>'s current implementation of the
- thesaurus dictionary is an extension of the synonym dictionary with added
- phrase support. A thesaurus dictionary requires
- a configuration file of the following format:
+ The
Snowball> dictionary template is based on the project
+ of Martin Porter, inventor of the popular Porter's stemming algorithm
+ for the English language. Snowball now provides stemming algorithms for
+ many languages (see the
Snowball
+ site for more information). Each algorithm understands how to
+ reduce common variant forms of words to a base, or stem, spelling within
+ its language. A Snowball dictionary requires a language>
+ parameter to identify which stemmer to use, and optionally can specify a
+ stopword> file name that gives a list of words to eliminate.
+ (
PostgreSQL's standard stopword lists are also
+ provided by the Snowball project.)
+ For example, there is a built-in definition equivalent to
-# this is a comment
-sample word(s) : indexed word(s)
-more sample word(s) : more indexed word(s)
-...
+CREATE TEXT SEARCH DICTIONARY english_stem (
+ TEMPLATE = snowball,
+ Language = english,
+ StopWords = english
+);
- where the colon (:) symbol acts as a delimiter between a
- a phrase and its replacement.
+ The stopword file format is the same as already explained.
- A thesaurus dictionary uses a subdictionary (which
- is defined in the dictionary's configuration) to normalize the input text
- before checking for phrase matches. It is only possible to select one
- subdictionary. An error is reported if the subdictionary fails to
- recognize a word. In that case, you should remove the use of the word or teach
- the subdictionary about it. Use an asterisk (*) at the
- beginning of an indexed word to skip the subdictionary. It is still required
- that sample words are known.
+ A
Snowball> dictionary recognizes everything, whether
+ or not it is able to simplify the word, so it should be placed
+ at the end of the dictionary list. It is useless to place it
+ before any other dictionary because a token will never pass through it to
+ the next dictionary.
+
+
+
+
+
+
+
+
Configuration Example
+
+ A text search configuration specifies all options necessary to transform a
+ document into a tsvector: the parser to use to break text
+ into tokens, and the dictionaries to use to transform each token into a
+ lexeme. Every call of
+ to_tsvector or to_tsquery
+ needs a text search configuration to perform its processing.
+ The configuration parameter
+
+ specifies the name of the default configuration, which is the
+ one used by text search functions if an explicit configuration
+ parameter is omitted.
+ It can be set in postgresql.conf, or set for an
+ individual session using the SET> command.
- The thesaurus dictionary looks for the longest match.
+ Several predefined text search configurations are available, and
+ you can create custom configurations easily. To facilitate management
+ of text search objects, a set of
SQL commands
+ is available, and there are several psql commands that display information
+ about text search objects ().
- Stop words recognized by the subdictionary are replaced by a stop word
- placeholder to record their position. To break possible ties the thesaurus
- uses the last definition. To illustrate this, consider a thesaurus (with
- a
simple subdictionary) with pattern
- swsw>, where s> designates any stop word and
- w>, any known word:
+ As an example, we will create a configuration
+ pg, starting from a duplicate of the built-in
+ english> configuration.
-a one the two : swsw
-the one a two : swsw2
+CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = pg_catalog.english );
-
- Words a> and the> are stop words defined in the
- configuration of a subdictionary. The thesaurus considers the
- one the two and that one then two as equal
- and will use definition swsw2>.
- Since a thesaurus dictionary has the capability to recognize phrases it
- must remember its state and interact with the parser. A thesaurus dictionary
- uses these assignments to check if it should handle the next word or stop
- accumulation. The thesaurus dictionary must be configured
- carefully. For example, if the thesaurus dictionary is assigned to handle
- only the lword token, then a thesaurus dictionary
- definition like ' one 7' will not work since token type
- uint is not assigned to the thesaurus dictionary.
-
-
-
- Thesauruses are used during indexing so any change in the thesaurus
- dictionary's parameters requires reindexing.
- For most other dictionary types, small changes such as adding or
- removing stopwords does not force reindexing.
-
-
+ We will use a PostgreSQL-specific synonym list
+ and store it in $SHAREDIR/tsearch_data/pg_dict.syn.
+ The file contents look like:
-
-
Thesaurus Configuration
+postgres pg
+pgsql pg
+postgresql pg
+
- To define a new thesaurus dictionary, use the thesaurus>
- template. For example:
+ We define the synonym dictionary like this:
-CREATE TEXT SEARCH DICTIONARY thesaurus_simple (
- TEMPLATE = thesaurus,
- DictFile = mythesaurus,
- Dictionary = pg_catalog.english_stem
+CREATE TEXT SEARCH DICTIONARY pg_dict (
+ TEMPLATE = synonym,
+ SYNONYMS = pg_dict
);
- Here:
-
-
- thesaurus_simple is the new dictionary's name
-
-
-
- mythesaurus is the base name of the thesaurus
- configuration file.
- (Its full name will be $SHAREDIR/tsearch_data/mythesaurus.ths>,
- where $SHAREDIR> means the installation shared-data
- directory.)
-
-
-
- pg_catalog.english_stem is the subdictionary (here,
- a Snowball English stemmer) to use for thesaurus normalization.
- Notice that the subdictionary will have its own
- configuration (for example, stop words), which is not shown here.
-
-
-
+ Next we register the
ispell> dictionary
+ english_ispell, which has its own configuration files:
- Now it is possible to bind the thesaurus dictionary thesaurus_simple
- to the desired token types, for example:
+CREATE TEXT SEARCH DICTIONARY english_ispell (
+ TEMPLATE = ispell,
+ DictFile = english,
+ AffFile = english,
+ StopWords = english
+);
+
+
+ Now we can set up the mappings for Latin words for configuration
+ pg>:
-ALTER TEXT SEARCH CONFIGURATION russian
- ADD MAPPING FOR lword, lhword, lpart_hword WITH thesaurus_simple;
+ALTER TEXT SEARCH CONFIGURATION pg
+ ALTER MAPPING FOR lword, lhword, lpart_hword
+ WITH pg_dict, english_ispell, english_stem;
-
-
+ We choose not to index or search some token types that the built-in
+ configuration does handle:
-
-
Thesaurus Example
+ALTER TEXT SEARCH CONFIGURATION pg
+ DROP MAPPING FOR email, url, sfloat, uri, float;
+
+
- Consider a simple astronomical thesaurus thesaurus_astro,
- which contains some astronomical word combinations:
+ Now we can test our configuration:
-supernovae stars : sn
-crab nebulae : crab
+SELECT * FROM ts_debug('public.pg', '
+PostgreSQL, the highly scalable, SQL compliant, open source object-relational
+database management system, is now undergoing beta testing of the next
+version of our software.
+');
+
- Below we create a dictionary and bind some token types with
- an astronomical thesaurus and english stemmer:
+ The next step is to set the session to use the new configuration, which was
+ created in the public> schema:
-CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
- TEMPLATE = thesaurus,
- DictFile = thesaurus_astro,
- Dictionary = english_stem
-);
+=> \dF
+ List of text search configurations
+ Schema | Name | Description
+---------+------+-------------
+ public | pg |
-ALTER TEXT SEARCH CONFIGURATION russian
- ADD MAPPING FOR lword, lhword, lpart_hword WITH thesaurus_astro, english_stem;
+SET default_text_search_config = 'public.pg';
+SET
+
+SHOW default_text_search_config;
+ default_text_search_config
+----------------------------
+ public.pg
+
- Now we can see how it works.
- ts_lexize is not very useful for testing a thesaurus,
- because it treats its input as a single token. Instead we can use
- plainto_tsquery and to_tsvector
- which will break their input strings into multiple tokens:
+
-SELECT plainto_tsquery('supernova star');
- plainto_tsquery
------------------
- 'sn'
+
+
Testing and Debugging Text Search
-SELECT to_tsvector('supernova star');
- to_tsvector
--------------
- 'sn':1
-
+ The behavior of a custom text search configuration can easily become
+ complicated enough to be confusing or undesirable. The functions described
+ in this section are useful for testing text search objects. You can
+ test a complete configuration, or test parsers and dictionaries separately.
+
+
+
+
Configuration Testing
+
+ The function ts_debug allows easy testing of a
+ text search configuration.
+
+
+
+
+
+
+ ts_debug( config regconfig>, document text>) returns setof ts_debug>
+
+
+ ts_debug> displays information about every token of
+ document as produced by the
+ parser and processed by the configured dictionaries. It uses the
+ configuration specified by config,
+ or default_text_search_config if that argument is
+ omitted.
+
- In principle, one can use to_tsquery if you quote
- the argument:
+ ts_debug>'s result row type is defined as:
-SELECT to_tsquery('''supernova star''');
- to_tsquery
-------------
- 'sn'
+CREATE TYPE ts_debug AS (
+ "Alias" text,
+ "Description" text,
+ "Token" text,
+ "Dictionaries" regdictionary[],
+ "Lexized token" text
+);
- Notice that supernova star matches supernovae
- stars in thesaurus_astro because we specified
- the english_stem stemmer in the thesaurus definition.
- The stemmer removed the e>.
-
+ One row is produced for each token identified by the parser.
+ The first three columns describe the token, and the fourth lists
+ the dictionaries selected by the configuration for that token's type.
+ The last column shows the result of dictionary processing: which
+ dictionary (if any) recognized the token, and what it produced.
+
- To index the original phrase as well as the substitute, just include it
- in the right-hand part of the definition:
+ Here is a simple example:
-supernovae stars : sn supernovae stars
-
-SELECT plainto_tsquery('supernova star');
- plainto_tsquery
------------------------------
- 'sn' & 'supernova' & 'star'
+SELECT * FROM ts_debug('english','a fat cat sat on a mat - it ate a fat rats');
+ Alias | Description | Token | Dictionaries | Lexized token
+-------+---------------+-------+--------------+----------------
+ lword | Latin word | a | {english} | english: {}
+ blank | Space symbols | | |
+ lword | Latin word | fat | {english} | english: {fat}
+ blank | Space symbols | | |
+ lword | Latin word | cat | {english} | english: {cat}
+ blank | Space symbols | | |
+ lword | Latin word | sat | {english} | english: {sat}
+ blank | Space symbols | | |
+ lword | Latin word | on | {english} | english: {}
+ blank | Space symbols | | |
+ lword | Latin word | a | {english} | english: {}
+ blank | Space symbols | | |
+ lword | Latin word | mat | {english} | english: {mat}
+ blank | Space symbols | | |
+ blank | Space symbols | - | |
+ lword | Latin word | it | {english} | english: {}
+ blank | Space symbols | | |
+ lword | Latin word | ate | {english} | english: {ate}
+ blank | Space symbols | | |
+ lword | Latin word | a | {english} | english: {}
+ blank | Space symbols | | |
+ lword | Latin word | fat | {english} | english: {fat}
+ blank | Space symbols | | |
+ lword | Latin word | rats | {english} | english: {rat}
+ (24 rows)
-
-
-
-
-
-
-
-
- The
Ispell> dictionary template supports
- morphological dictionaries>, which can normalize many
- different linguistic forms of a word into the same lexeme. For example,
- an English
Ispell> dictionary can match all declensions and
- conjugations of the search term bank, e.g.
- banking>, banked>, banks>,
- banks'>, and bank's>.
-
-
- The standard
PostgreSQL distribution does
- not include any
Ispell> configuration files.
- Dictionaries for a large number of languages are available from
- url="http://ficus-www.cs.ucla.edu/geoff/ispell.html">Ispell.
- Also, some more modern dictionary file formats are supported —
- url="http://en.wikipedia.org/wiki/MySpell">MySpell (OO < 2.0.1)
- (OO >= 2.0.2). A large list of dictionaries is available on the
- url="http://wiki.services.openoffice.org/wiki/Dictionaries">OpenOffice
- Wiki.
-
+
- To create an
Ispell> dictionary, use the built-in
- ispell template and specify several parameters:
-
+ For a more extensive demonstration, we
+ first create a public.english configuration and
+ ispell dictionary for the English language:
+
+CREATE TEXT SEARCH CONFIGURATION public.english ( COPY = pg_catalog.english );
+
CREATE TEXT SEARCH DICTIONARY english_ispell (
TEMPLATE = ispell,
DictFile = english,
AffFile = english,
StopWords = english
);
+
+ALTER TEXT SEARCH CONFIGURATION public.english
+ ALTER MAPPING FOR lword WITH english_ispell, english_stem;
- Here, DictFile>, AffFile>, and StopWords>
- specify the base names of the dictionary, affixes, and stop-words files.
- The stop-words file has the same format explained above for the
- simple> dictionary type. The format of the other files is
- not specified here but is available from the above-mentioned web sites.
-
+SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
+ Alias | Description | Token | Dictionaries | Lexized token
+-------+---------------+-------------+-------------------------------------------------+-------------------------------------
+ lword | Latin word | The | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {}
+ blank | Space symbols | | |
+ lword | Latin word | Brightest | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {bright}
+ blank | Space symbols | | |
+ lword | Latin word | supernovaes | {public.english_ispell,pg_catalog.english_stem} | pg_catalog.english_stem: {supernova}
+(5 rows)
+
- Ispell dictionaries usually recognize a limited set of words, so they
- should be followed by another broader dictionary; for
- example, a Snowball dictionary, which recognizes everything.
-
+ In this example, the word Brightest> was recognized by the
+ parser as a Latin word (alias lword).
+ For this token type the dictionary list is
+ public.english_ispell> and
+ pg_catalog.english_stem. The word was recognized by
+ public.english_ispell, which reduced it to the noun
+ bright. The word supernovaes is
+ unknown to the public.english_ispell dictionary so it
+ was passed to the next dictionary, and, fortunately, was recognized (in
+ fact, public.english_stem is a Snowball dictionary which
+ recognizes everything; that is why it was placed at the end of the
+ dictionary list).
+
- Ispell dictionaries support splitting compound words.
- This is a nice feature and
-
PostgreSQL supports it.
- Notice that the affix file should specify a special flag using the
- compoundwords controlled statement that marks dictionary
- words that can participate in compound formation:
+ The word The was recognized by the
+
public.english_ispell dictionary as a stop word (
+ linkend="textsearch-stopwords">) and will not be indexed.
+ The spaces are discarded too, since the configuration provides no
+ dictionaries at all for them.
+
+
+ You can reduce the volume of output by explicitly specifying which columns
+ you want to see:
-compoundwords controlled z
+SELECT "Alias", "Token", "Lexized token"
+FROM ts_debug('public.english','The Brightest supernovaes');
+ Alias | Token | Lexized token
+-------+-------------+--------------------------------------
+ lword | The | public.english_ispell: {}
+ blank | |
+ lword | Brightest | public.english_ispell: {bright}
+ blank | |
+ lword | supernovaes | pg_catalog.english_stem: {supernova}
+(5 rows)
+
- Here are some examples for the Norwegian language:
+
-SELECT ts_lexize('norwegian_ispell','overbuljongterningpakkmesterassistent');
- {over,buljong,terning,pakk,mester,assistent}
-SELECT ts_lexize('norwegian_ispell','sjokoladefabrikk');
- {sjokoladefabrikk,sjokolade,fabrikk}
-
-
+
+
Parser Testing
-
-
MySpell> does not support compound words.
-
Hunspell> has sophisticated support for compound words. At
- present,
PostgreSQL implements only the basic
- compound word operations of Hunspell.
-
-
+ The following functions allow direct testing of a text search parser.
+
-
+
+
-
+
+ ts_parse(parser_name text>, document text>, OUT tokid> integer>, OUT token> text>) returns setof record>
+ ts_parse(parser_oid oid>, document text>, OUT tokid> integer>, OUT token> text>) returns setof record>
+
- The
Snowball> dictionary template is based on the project
- of Martin Porter, inventor of the popular Porter's stemming algorithm
- for the English language. Snowball now provides stemming algorithms for
- many languages (see the
Snowball
- site for more information). Each algorithm understands how to
- reduce common variant forms of words to a base, or stem, spelling within
- its language. A Snowball dictionary requires a language>
- parameter to identify which stemmer to use, and optionally can specify a
- stopword> file name that gives a list of words to eliminate.
- (
PostgreSQL's standard stopword lists are also
- provided by the Snowball project.)
- For example, there is a built-in definition equivalent to
+ ts_parse> parses the given document
+ and returns a series of records, one for each token produced by
+ parsing. Each record includes a tokid showing the
+ assigned token type and a token which is the text of the
+ token. For example:
-CREATE TEXT SEARCH DICTIONARY english_stem (
- TEMPLATE = snowball,
- Language = english,
- StopWords = english
-);
+SELECT * FROM ts_parse('default', '123 - a number');
+ tokid | token
+-------+--------
+ 22 | 123
+ 12 |
+ 12 | -
+ 1 | a
+ 12 |
+ 1 | number
+
- The stopword file format is the same as already explained.
-
+
+
- A
Snowball> dictionary recognizes everything, whether
- or not it is able to simplify the word, so it should be placed
- at the end of the dictionary list. It it useless to have it
- before any other dictionary because a token will never pass through it to
- the next dictionary.
+
+ ts_token_type(parser_name> text>, OUT tokid> integer>, OUT alias> text>, OUT description> text>) returns setof record>
+ ts_token_type(parser_oid> oid>, OUT tokid> integer>, OUT alias> text>, OUT description> text>) returns setof record>
+
+
+ ts_token_type> returns a table which describes each type of
+ token the specified parser can recognize. For each token type, the table
+ gives the integer tokid that the parser uses to label a
+ token of that type, the alias that names the token type
+ in configuration commands, and a short description. For
+ example:
+
+SELECT * FROM ts_token_type('default');
+ tokid | alias | description
+-------+--------------+-----------------------------------
+ 1 | lword | Latin word
+ 2 | nlword | Non-latin word
+ 3 | word | Word
+ 4 | email | Email
+ 5 | url | URL
+ 6 | host | Host
+ 7 | sfloat | Scientific notation
+ 8 | version | VERSION
+ 9 | part_hword | Part of hyphenated word
+ 10 | nlpart_hword | Non-latin part of hyphenated word
+ 11 | lpart_hword | Latin part of hyphenated word
+ 12 | blank | Space symbols
+ 13 | tag | HTML Tag
+ 14 | protocol | Protocol head
+ 15 | hword | Hyphenated word
+ 16 | lhword | Latin hyphenated word
+ 17 | nlhword | Non-latin hyphenated word
+ 18 | uri | URI
+ 19 | file | File or path name
+ 20 | float | Decimal notation
+ 21 | int | Signed integer
+ 22 | uint | Unsigned integer
+ 23 | entity | HTML Entity
+
Dictionary Testing
- The ts_lexize> function facilitates dictionary testing:
-
-
+ The ts_lexize> function facilitates dictionary testing.
+
-
+
+
- >
- >
+ >
+ ts_lexize(dict regdictionary>, token text>) returns text[]>
+ >
-
-
- ts_lexize(dict_name text, token text) returns text[]
-
-
+ ts_lexize> returns an array of lexemes if the input
+ token is known to the dictionary,
+ or an empty array if the token
+ is known to the dictionary but it is a stop word, or
+ NULL if it is an unknown word.
+
-
- Returns an array of lexemes if the input
- token is known to the dictionary
- dict_name, or an empty array if the token
- is known to the dictionary but it is a stop word, or
- NULL if it is an unknown word.
-
+ Examples:
SELECT ts_lexize('english_stem', 'stars');
 ts_lexize
-----------
 {star}

SELECT ts_lexize('english_stem', 'a');
 ts_lexize
-----------
 {}
-
-
-
-
- The ts_lexize function expects a
- token, not text. Below is an example:
+ The ts_lexize function expects a single
+ token, not text. Here is a case
+ where this can be confusing:
SELECT ts_lexize('thesaurus_astro','supernovae stars') is null;
t
- The thesaurus dictionary thesaurus_astro does know
- supernovae stars, but ts_lexize> fails since it
- does not parse the input text and considers it as a single token. Use
- plainto_tsquery> and to_tsvector> to test thesaurus
- dictionaries:
+ The thesaurus dictionary thesaurus_astro does know the
+ phrase supernovae stars, but ts_lexize>
+ fails since it does not parse the input text but treats it as a single
+ token. Use plainto_tsquery> or to_tsvector> to
+ test thesaurus dictionaries, for example:
SELECT plainto_tsquery('supernovae stars');
- Also, the
ts_debug function (
- linkend="textsearch-debugging">) is helpful for testing dictionaries.
-
-
-
-
-
-
Configuration Example
-
- A text search configuration specifies all options necessary to transform a
- document into a tsvector: the parser to use to break text
- into tokens, and the dictionaries to use to transform each token into a
- lexeme. Every call of
- to_tsvector() or to_tsquery()
- needs a text search configuration to perform its processing.
- The configuration parameter
-
- specifies the name of the current default configuration, which is the
- one used by text search functions if an explicit configuration
- parameter is omitted.
- It can be set in postgresql.conf, or set for an
- individual session using the SET> command.
-
-
- Several predefined text search configurations are available, and
- you can create custom configurations easily. To facilitate management
- of text search objects, a set of
SQL commands
- is available, and there are several psql commands that display information
- about text search objects ().
-
-
- As an example, we will create a configuration
- pg which starts as a duplicate of the
- english> configuration. To be safe, we do this in a transaction:
-
-BEGIN;
-
-CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = english );
-
-
-
- We will use a PostgreSQL-specific synonym list
- and store it in $SHAREDIR/tsearch_data/pg_dict.syn.
- The file contents look like:
-
-postgres pg
-pgsql pg
-postgresql pg
-
-
- We define the dictionary like this:
-
-CREATE TEXT SEARCH DICTIONARY pg_dict (
- TEMPLATE = synonym,
- SYNONYMS = pg_dict
-);
-
-
- Next we register the
ispell> dictionary
- english_ispell:
-
-CREATE TEXT SEARCH DICTIONARY english_ispell (
- TEMPLATE = ispell,
- DictFile = english,
- AffFile = english,
- StopWords = english
-);
-
-
- Now modify the mappings for Latin words for configuration pg>:
-
-ALTER TEXT SEARCH CONFIGURATION pg
- ALTER MAPPING FOR lword, lhword, lpart_hword
- WITH pg_dict, english_ispell, english_stem;
-
-
- We do not index or search some token types:
-
-ALTER TEXT SEARCH CONFIGURATION pg
- DROP MAPPING FOR email, url, sfloat, uri, float;
-
-
-
- Now, we can test our configuration:
-
-COMMIT;
-
-SELECT * FROM ts_debug('public.pg', '
-PostgreSQL, the highly scalable, SQL compliant, open source object-relational
-database management system, is now undergoing beta testing of the next
-version of our software.
-');
-
-
-
- The next step is to set the session to use the new configuration, which was
- created in the public> schema:
-
-=> \dF
- List of text search configurations
- Schema | Name | Description
----------+------+-------------
- public | pg |
-
-SET default_text_search_config = 'public.pg';
-SET
-
-SHOW default_text_search_config;
- default_text_search_config
-----------------------------
- public.pg
-
-
-
- index
+ indexes
-
There are two kinds of indexes that can be used to speed up full text
searches.
-
- GiST
-
-
GiST
-
- GIN
-
-
GIN
+ There are substantial performance differences between the two index types,
+ so it is important to understand which to use.
+
+
 A GiST index is lossy, meaning it is necessary
 to check the actual table row to eliminate false matches. In a query
 plan, this recheck appears as a Filter: condition on the index scan,
 for example:
 Filter: (textsearch @@ '''supernova'''::tsquery)
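+ A complete plan for such a search might look like the following sketch
+ (the apod table, textsearch column, and index name simply follow this
+ chapter's other examples; cost and row figures are only illustrative):
+
+EXPLAIN SELECT * FROM apod WHERE textsearch @@ to_tsquery('supernovae');
+                                  QUERY PLAN
+-------------------------------------------------------------------------------
+ Index Scan using textsearch_idx on apod  (cost=0.00..12.30 rows=2 width=1469)
+   Index Cond: (textsearch @@ '''supernova'''::tsquery)
+   Filter: (textsearch @@ '''supernova'''::tsquery)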
- GiST index lossiness happens because each document is represented by a
- fixed-length signature. The signature is generated by hashing (crc32) each
- word into a random bit in an n-bit string and all words combine to produce
- an n-bit document signature. Because of hashing there is a chance that
- some words hash to the same position and could result in a false hit.
- Signatures calculated for each document in a collection are stored in an
- RD-tree (Russian Doll tree), invented by Hellerstein,
- which is an adaptation of R-tree for sets. In our case
- the transitive containment relation is realized by
- superimposed coding (Knuth, 1973) of signatures, i.e., a parent is the
- result of 'OR'-ing the bit-strings of all children. This is a second
- factor of lossiness. It is clear that parents tend to be full of
- 1>s (degenerates) and become quite useless because of the
- limited selectivity. Searching is performed as a bit comparison of a
- signature representing the query and an RD-tree entry.
- If all 1>s of both signatures are in the same position we
- say that this branch probably matches the query, but if there is even one
- discrepancy we can definitely reject this branch.
-
-
- Lossiness causes serious performance degradation since random access of
- heap records is slow and limits the usefulness of GiST
- indexes. The likelihood of false hits depends on several factors, like
- the number of unique words, so using dictionaries to reduce this number
- is recommended.
+ GiST indexes are lossy because each document is represented in the
+ index by a fixed-length signature. The signature is generated by hashing
+ each word into a random bit in an n-bit string, with all these bits OR-ed
+ together to produce an n-bit document signature. When two words hash to
+ the same bit position there will be a false match, and if all words in
+ the query have matches (real or false) then the table row must be
+ retrieved to see if the match is correct.
- Actually, this is not the whole story. GiST indexes have an optimization
- for storing small tsvectors (under TOAST_INDEX_TARGET
- bytes, 512 bytes by default). On leaf pages small tsvectors are stored unchanged,
- while longer ones are represented by their signatures, which introduces
- some lossiness. Unfortunately, the existing index API does not allow for
- a return value to say whether it found an exact value (tsvector) or whether
- the result needs to be checked. This is why the GiST index is
- currently marked as lossy. We hope to improve this in the future.
+ Lossiness causes performance degradation since random access to table
+ records is slow; this limits the usefulness of GiST indexes. The
+ likelihood of false matches depends on several factors, in particular the
+ number of unique words, so using dictionaries to reduce this number is
+ recommended.
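+ To gauge how many unique lexemes a document collection produces, and
+ hence how likely false matches are, you can count them with
+ ts_stat. This is only a sketch; apod and textsearch are the
+ hypothetical table and column names used elsewhere in this chapter:
+
+-- count distinct lexemes across the whole collection
+SELECT count(*) AS unique_lexemes
+FROM ts_stat('SELECT textsearch FROM apod');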
- There is one side-effect of the non-lossiness of a GIN index when using
- query labels/weights, like 'supernovae:a'. A GIN index
- has all the information necessary to determine a match, so the heap is
- not accessed. However, label information is not stored in the index,
- so if the query involves label weights it must access
- the heap. Therefore, a special full text search operator @@@
- was created that forces the use of the heap to get information about
- labels. GiST indexes are lossy so it always reads the heap and there is
- no need for a special operator. In the example below,
- fulltext_idx is a GIN index:
-
-EXPLAIN SELECT * FROM apod WHERE textsearch @@@ to_tsquery('supernovae:a');
- QUERY PLAN
-------------------------------------------------------------------------
- Index Scan using textsearch_idx on apod (cost=0.00..12.30 rows=2 width=1469)
- Index Cond: (textsearch @@@ '''supernova'':A'::tsquery)
- Filter: (textsearch @@@ '''supernova'':A'::tsquery)
-
+ Actually, GIN indexes store only the words (lexemes) of tsvector>
+ values, and not their weight labels. Thus, while a GIN index can be
+ considered non-lossy for a query that does not specify weights, it is
+ lossy for one that does, and a table row recheck is needed when the
+ query involves weights. Unfortunately, in the current design of
+ PostgreSQL>, whether a recheck is needed is a static
+ property of a particular operator, and not something that can be enabled
+ or disabled on-the-fly depending on the values given to the operator.
+ To deal with this situation without imposing the overhead of rechecks
+ on queries that do not need them, the following approach has been
+ adopted:
+
+
+
+
+ The standard text match operator @@> is marked as non-lossy
+ for GIN indexes.
+
+
+
+
+ An additional match operator @@@> is provided, and marked
+ as lossy for GIN indexes. This operator behaves exactly like
+ @@> otherwise.
+
+
+
+
+ When a GIN index search is initiated with the @@> operator,
+ the index support code will throw an error if the query specifies any
+ weights. This protects against giving wrong answers due to failure
+ to recheck the weights.
+
+
+
+ In short, you must use @@@> rather than @@> to
+ perform GIN index searches on queries that involve weight restrictions.
+ For queries that do not have weight restrictions, either operator will
+ work, but @@> will be faster.
+ This awkwardness will probably be addressed in a future release of
+ PostgreSQL>.
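+ As an illustration, assuming a GIN index on the textsearch column of
+ the apod table used in this chapter's examples (the title column is
+ likewise hypothetical):
+
+-- weight restriction: with a GIN index this must use @@@
+SELECT title FROM apod WHERE textsearch @@@ to_tsquery('supernovae:a');
+
+-- no weight restriction: @@ works with the GIN index and is faster
+SELECT title FROM apod WHERE textsearch @@ to_tsquery('supernovae');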
- In choosing which index type to use, GiST or GIN, consider these differences:
+ In choosing which index type to use, GiST or GIN, consider these
+ performance differences:
+
- GIN is about ten times slower to update than GiST
+ GIN indexes are about ten times slower to update than GiST
- In summary, GIN indexes are best for static data because
- the indexes are faster for lookups. For dynamic data, GiST indexes are
+ As a rule of thumb, GIN indexes are best for static data
+ because lookups are faster. For dynamic data, GiST indexes are
 faster to update. Specifically, GiST indexes are very
 good for dynamic data and fast if the number of unique words (lexemes) is
- under 100,000, while GIN handles 100,000+ lexemes better
- but is slower to update.
+ under 100,000, while GIN indexes will handle 100,000+
+ lexemes better but are slower to update.
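+ For reference, the two index types are created with ordinary
+ CREATE INDEX commands; the index, table, and column names below are
+ placeholders, and normally you would create only one index per column:
+
+CREATE INDEX pgweb_gist_idx ON pgweb USING gist(textsearch);
+CREATE INDEX pgweb_gin_idx ON pgweb USING gin(textsearch);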
Partitioning of big collections and the proper use of GiST and GIN indexes
allows the implementation of very fast searches with online update.
Partitioning can be done at the database level using table inheritance
- and constraint_exclusion>, or distributing documents over
+ and constraint_exclusion>, or by distributing documents over
servers and collecting search results using the contrib/dblink>
extension module. The latter is possible because ranking functions use
only local information.
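+ A minimal sketch of the inheritance approach, with hypothetical table
+ and column names, might look like this; each child table carries a
+ CHECK constraint so that constraint_exclusion can skip irrelevant
+ partitions:
+
+CREATE TABLE docs (id serial, created date, body text, textsearch tsvector);
+CREATE TABLE docs_2007 (
+    CHECK (created >= '2007-01-01' AND created < '2008-01-01')
+) INHERITS (docs);
+CREATE INDEX docs_2007_ts_idx ON docs_2007 USING gin(textsearch);
+
+SET constraint_exclusion = on;
+-- only partitions whose CHECK constraints can match the WHERE clause are scanned
+SELECT id FROM docs
+WHERE textsearch @@ to_tsquery('supernovae')
+  AND created >= '2007-06-01' AND created < '2007-07-01';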
 The length of each lexeme must be less than 2K bytes

- The length of a tsvector (lexemes + positions) must be less than 1 megabyte
+ The length of a tsvector (lexemes + positions) must be
+ less than 1 megabyte

- The number of lexemes must be less than 2^64
+ The number of lexemes must be less than
+ 2^64

- Positional information must be greater than 0 and less than 16,383
+ Position values in tsvector> must be greater than 0 and
+ no more than 16,383

 No more than 256 positions per lexeme

- The number of nodes (lexemes + operations) in a tsquery must be less than 32,768
+ The number of nodes (lexemes + operators) in a tsquery
+ must be less than 32,768
For comparison, the
PostgreSQL 8.1 documentation
- contained 10,441 unique words, a total of 335,420 words, and the most frequent
- word postgresql> was mentioned 6,127 times in 655 documents.
+ contained 10,441 unique words, a total of 335,420 words, and the most
+ frequent word postgresql> was mentioned 6,127 times in 655
+ documents.
- Another example — the PostgreSQL mailing list
- archives contained 910,989 unique words with 57,491,343 lexemes in 461,020
- messages.
+ Another example — the PostgreSQL mailing
+ list archives contained 910,989 unique words with 57,491,343 lexemes in
+ 461,020 messages.
-
-
Debugging
-
- The function ts_debug allows easy testing of a
- text search configuration.
-
-
-
- ts_debug( config_name, document text) returns SETOF ts_debug
-
-
- ts_debug> displays information about every token of
- document as produced by the
- parser and processed by the configured dictionaries using the configuration
- specified by config_name.
-
-
- ts_debug>'s result type is defined as:
-
-CREATE TYPE ts_debug AS (
- "Alias" text,
- "Description" text,
- "Token" text,
- "Dictionaries" regdictionary[],
- "Lexized token" text
-);
-
-
-
- For a demonstration of how function ts_debug works we
- first create a public.english configuration and
- ispell dictionary for the English language:
-
-
-CREATE TEXT SEARCH CONFIGURATION public.english ( COPY = pg_catalog.english );
-
-CREATE TEXT SEARCH DICTIONARY english_ispell (
- TEMPLATE = ispell,
- DictFile = english,
- AffFile = english,
- StopWords = english
-);
-
-ALTER TEXT SEARCH CONFIGURATION public.english
- ALTER MAPPING FOR lword WITH english_ispell, english_stem;
-
-
-SELECT * FROM ts_debug('public.english','The Brightest supernovaes');
- Alias | Description | Token | Dictionaries | Lexized token
--------+---------------+-------------+-------------------------------------------------+-------------------------------------
- lword | Latin word | The | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {}
- blank | Space symbols | | |
- lword | Latin word | Brightest | {public.english_ispell,pg_catalog.english_stem} | public.english_ispell: {bright}
- blank | Space symbols | | |
- lword | Latin word | supernovaes | {public.english_ispell,pg_catalog.english_stem} | pg_catalog.english_stem: {supernova}
-(5 rows)
-
-
- In this example, the word Brightest> was recognized by the
- parser as a Latin word (alias lword).
- For this token type the dictionary list is
- public.english_ispell> and
- pg_catalog.english_stem. The word was recognized by
- public.english_ispell, which reduced it to the word
- bright. The word supernovaes is unknown
- to the public.english_ispell dictionary so it was passed to
- the next dictionary, and, fortunately, was recognized (in fact,
- public.english_stem is a Snowball dictionary which
- recognizes everything; that is why it was placed at the end of the
- dictionary list).
-
-
- The word The was recognized by the public.english_ispell
- dictionary as a stop word () and will not be indexed.
-
+
+
Migration from Pre-8.3 Text Search
- You can always explicitly specify which columns you want to see:
-
-SELECT "Alias", "Token", "Lexized token"
-FROM ts_debug('public.english','The Brightest supernovaes');
- Alias | Token | Lexized token
--------+-------------+--------------------------------------
- lword | The | public.english_ispell: {}
- blank | |
- lword | Brightest | public.english_ispell: {bright}
- blank | |
- lword | supernovaes | pg_catalog.english_stem: {supernova}
-(5 rows)
-
+ This needs to be written ...