Class SynonymFilter

  • All Implemented Interfaces:
    java.io.Closeable, java.lang.AutoCloseable

    public final class SynonymFilter
    extends TokenFilter
    Matches single- or multi-word synonyms in a token stream. This token stream cannot properly handle position increments != 1, i.e., you should place this filter before filtering out stop words.

    Note that with the current implementation, parsing is greedy, so whenever multiple parses would apply, the rule starting the earliest and parsing the most tokens wins. For example, if you have these rules:

       a -> x
       a b -> y
       b c d -> z
     
    Then the input a b c d e parses to y b c d, i.e., the 2nd rule "wins" because it started earliest and matched the most input tokens among the rules starting at that point.
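
    The example above can be reproduced programmatically. The following is a minimal, hypothetical sketch (the demo class name and token text are illustrative, and helper signatures such as SynonymMap.Builder.join vary slightly across Lucene versions): it registers the three rules with includeOrig=false, runs SynonymFilter over the tokens a b c d e, and prints the parsed tokens, with the greedy match applying the "a b -> y" rule at the first position.

       import java.io.StringReader;

       import org.apache.lucene.analysis.TokenStream;
       import org.apache.lucene.analysis.core.WhitespaceTokenizer;
       import org.apache.lucene.analysis.synonym.SynonymFilter;
       import org.apache.lucene.analysis.synonym.SynonymMap;
       import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
       import org.apache.lucene.util.CharsRef;
       import org.apache.lucene.util.CharsRefBuilder;

       public class GreedySynonymDemo {
         public static void main(String[] args) throws Exception {
           SynonymMap.Builder builder = new SynonymMap.Builder(true); // true = dedup identical rules

           // a -> x
           builder.add(new CharsRef("a"), new CharsRef("x"), false);
           // a b -> y (multi-word inputs are joined with the builder's word separator)
           builder.add(SynonymMap.Builder.join(new String[] {"a", "b"}, new CharsRefBuilder()),
                       new CharsRef("y"), false);
           // b c d -> z
           builder.add(SynonymMap.Builder.join(new String[] {"b", "c", "d"}, new CharsRefBuilder()),
                       new CharsRef("z"), false);

           WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
           tokenizer.setReader(new StringReader("a b c d e"));
           TokenStream stream = new SynonymFilter(tokenizer, builder.build(), false);

           CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
           stream.reset();
           while (stream.incrementToken()) {
             System.out.print(term + " "); // the greedy parse applies "a b -> y" at the first token
           }
           stream.end();
           stream.close();
         }
       }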

    A future improvement to this filter could allow non-greedy parsing, such that the 3rd rule would win, and could also separately allow multiple parses, such that all 3 rules would match, perhaps even on a rule-by-rule basis.

    NOTE: when a match occurs, the output tokens associated with the matching rule are "stacked" on top of the input stream (if the rule had keepOrig=true) and also on top of another matched rule's output tokens. This is not a correct solution, as the output really should be an arbitrary graph/lattice. For example, with the above match, you would expect an exact PhraseQuery "y b c" to match the parsed tokens, but it will fail to do so. This limitation is necessary because Lucene's TokenStream (and index) cannot yet represent an arbitrary graph.
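
    The stacking can be observed by reading PositionIncrementAttribute from the filter: a stacked synonym arrives with a position increment of 0, i.e., it shares the position of the token it is stacked on. The sketch below is hypothetical (single illustrative rule and class name) and assumes a rule added with includeOrig=true.

       import java.io.StringReader;

       import org.apache.lucene.analysis.TokenStream;
       import org.apache.lucene.analysis.core.WhitespaceTokenizer;
       import org.apache.lucene.analysis.synonym.SynonymFilter;
       import org.apache.lucene.analysis.synonym.SynonymMap;
       import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
       import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
       import org.apache.lucene.util.CharsRef;
       import org.apache.lucene.util.CharsRefBuilder;

       public class StackedSynonymDemo {
         public static void main(String[] args) throws Exception {
           SynonymMap.Builder builder = new SynonymMap.Builder(true);
           // a b -> y, with includeOrig=true so the original tokens are kept
           builder.add(SynonymMap.Builder.join(new String[] {"a", "b"}, new CharsRefBuilder()),
                       new CharsRef("y"), true);

           WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
           tokenizer.setReader(new StringReader("a b c"));
           TokenStream stream = new SynonymFilter(tokenizer, builder.build(), false);

           CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
           PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class);
           stream.reset();
           while (stream.incrementToken()) {
             // posIncr == 0 means this token is stacked on the previous token's position
             System.out.println(term + " (posIncr=" + posIncr.getPositionIncrement() + ")");
           }
           stream.end();
           stream.close();
         }
       }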

    NOTE: If multiple incoming tokens arrive on the same position, only the first token at that position is used for parsing. Subsequent tokens simply pass through and are not parsed. A future improvement would be to allow these tokens to also be matched.

    • Constructor Detail

      • SynonymFilter

        public SynonymFilter(TokenStream input,
                             SynonymMap synonyms,
                             boolean ignoreCase)
        Parameters:
        input - input tokenstream
        synonyms - synonym map
        ignoreCase - case-folds input for matching with Character.toLowerCase(int). Note that if you set this to true, it is your responsibility to lowercase the input entries when you create the SynonymMap
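
        A minimal, hypothetical usage sketch (the synonym entry, class name, and token text are illustrative): because ignoreCase=true only case-folds the incoming tokens, the entries themselves are added to the SynonymMap in lowercase.

           import java.io.StringReader;

           import org.apache.lucene.analysis.TokenStream;
           import org.apache.lucene.analysis.core.WhitespaceTokenizer;
           import org.apache.lucene.analysis.synonym.SynonymFilter;
           import org.apache.lucene.analysis.synonym.SynonymMap;
           import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
           import org.apache.lucene.util.CharsRef;

           public class IgnoreCaseDemo {
             public static void main(String[] args) throws Exception {
               SynonymMap.Builder builder = new SynonymMap.Builder(true);
               // Entry is lowercase; ignoreCase=true folds only the input side of the match.
               builder.add(new CharsRef("ipod"), new CharsRef("i-pod"), true);

               WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
               tokenizer.setReader(new StringReader("iPod"));
               // "iPod" matches the lowercase entry "ipod" because ignoreCase=true.
               TokenStream stream = new SynonymFilter(tokenizer, builder.build(), true);

               CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
               stream.reset();
               while (stream.incrementToken()) {
                 System.out.println(term.toString());
               }
               stream.end();
               stream.close();
             }
           }
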
    • Method Detail

      • incrementToken

        public boolean incrementToken()
                               throws java.io.IOException
        Description copied from class: TokenStream
        Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

        The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

        This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

        To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
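
        As an illustration of this contract, here is a minimal, hypothetical filter (the class name and lower-casing behavior are purely illustrative and not part of SynonymFilter): the attribute reference is obtained once at construction time, and incrementToken() updates that shared attribute in place for each token.

           import java.io.IOException;

           import org.apache.lucene.analysis.TokenFilter;
           import org.apache.lucene.analysis.TokenStream;
           import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

           public final class LowerCaseExampleFilter extends TokenFilter {
             // Retrieved once at instantiation, not per call to incrementToken().
             private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

             public LowerCaseExampleFilter(TokenStream input) {
               super(input);
             }

             @Override
             public boolean incrementToken() throws IOException {
               if (!input.incrementToken()) {
                 return false; // end of stream
               }
               // Mutate the shared attribute in place for the current token.
               // (A production filter would also handle supplementary code points.)
               final char[] buffer = termAtt.buffer();
               for (int i = 0; i < termAtt.length(); i++) {
                 buffer[i] = Character.toLowerCase(buffer[i]);
               }
               return true;
             }
           }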

        Specified by:
        incrementToken in class TokenStream
        Returns:
        false for end of stream; true otherwise
        Throws:
        java.io.IOException
      • reset

        public void reset()
                   throws java.io.IOException
        Description copied from class: TokenFilter
        This method is called by a consumer before it begins consumption using TokenStream.incrementToken().

        Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

        If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).

        NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.
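
        A minimal, hypothetical sketch of a stateful filter (the class name and counting behavior are illustrative): the reset() override first calls super.reset() so the wrapped input stream is reset, and then clears the filter's own state.

           import java.io.IOException;

           import org.apache.lucene.analysis.TokenFilter;
           import org.apache.lucene.analysis.TokenStream;

           public final class CountingExampleFilter extends TokenFilter {
             private int tokenCount; // stateful: must be cleared in reset()

             public CountingExampleFilter(TokenStream input) {
               super(input);
             }

             @Override
             public boolean incrementToken() throws IOException {
               if (input.incrementToken()) {
                 tokenCount++;
                 return true;
               }
               return false;
             }

             @Override
             public void reset() throws IOException {
               super.reset();  // required: resets the wrapped input stream
               tokenCount = 0; // then reset this filter's own state
             }
           }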

        Overrides:
        reset in class TokenFilter
        Throws:
        java.io.IOException