Class UAX29URLEmailTokenizer
- java.lang.Object
  - org.apache.lucene.util.AttributeSource
    - org.apache.lucene.analysis.TokenStream
      - org.apache.lucene.analysis.Tokenizer
        - org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer
- All Implemented Interfaces:
java.io.Closeable, java.lang.AutoCloseable
public final class UAX29URLEmailTokenizer extends Tokenizer
This class implements Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29. URLs and email addresses are also tokenized according to the relevant RFCs. Tokens produced are of the following types:
- <ALPHANUM>: A sequence of alphabetic and numeric characters
- <NUM>: A number
- <URL>: A URL
- <EMAIL>: An email address
- <SOUTHEAST_ASIAN>: A sequence of characters from South and Southeast Asian languages, including Thai, Lao, Myanmar, and Khmer
- <IDEOGRAPHIC>: A single CJKV ideographic character
- <HIRAGANA>: A single hiragana character
You must specify the required Version compatibility when creating UAX29URLEmailTokenizer:
- As of 3.4, Hiragana and Han characters are no longer wrongly split from their combining characters. If you use a previous version number, you get the exact broken behavior for backwards compatibility.
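A minimal usage sketch of the consumer lifecycle described in the method details below (reset, incrementToken, end, close). The Version.LUCENE_40 constant, the demo class name, and the input text are assumptions; use the Version that matches your release:

    import java.io.StringReader;

    import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
    import org.apache.lucene.util.Version;

    public class UAX29URLEmailTokenizerDemo {
      public static void main(String[] args) throws Exception {
        String text = "Write to dev@lucene.apache.org or see http://lucene.apache.org for details.";
        UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer(
            Version.LUCENE_40,                 // assumed constant; match your Lucene release
            new StringReader(text));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        TypeAttribute type = tokenizer.addAttribute(TypeAttribute.class);

        tokenizer.reset();                     // required before consumption
        while (tokenizer.incrementToken()) {
          System.out.println(term + "\t" + type.type());  // e.g. <ALPHANUM>, <URL>, <EMAIL>
        }
        tokenizer.end();                       // records end-of-stream values such as the final offset
        tokenizer.close();
      }
    }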
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
-
-
Field Summary
Fields
- static int ALPHANUM
- static int EMAIL
- static int HANGUL
- static int HIRAGANA
- static int IDEOGRAPHIC
- static int KATAKANA
- static int NUM
- static int SOUTHEAST_ASIAN
- static java.lang.String[] TOKEN_TYPES: String token types that correspond to token type int constants
- static int URL
-
Constructor Summary
Constructors
- UAX29URLEmailTokenizer(Version matchVersion, java.io.Reader input): Creates a new instance of the UAX29URLEmailTokenizer.
- UAX29URLEmailTokenizer(Version matchVersion, AttributeSource.AttributeFactory factory, java.io.Reader input): Creates a new UAX29URLEmailTokenizer with a given AttributeSource.AttributeFactory.
-
Method Summary
All Methods / Instance Methods / Concrete Methods
- void close(): Releases resources associated with this stream.
- void end(): This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
- int getMaxTokenLength()
- boolean incrementToken(): Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
- void reset(): This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
- void setMaxTokenLength(int length): Set the max allowed token length.
-
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
-
-
-
-
Field Detail
-
ALPHANUM
public static final int ALPHANUM
- See Also:
- Constant Field Values
-
NUM
public static final int NUM
- See Also:
- Constant Field Values
-
SOUTHEAST_ASIAN
public static final int SOUTHEAST_ASIAN
- See Also:
- Constant Field Values
-
IDEOGRAPHIC
public static final int IDEOGRAPHIC
- See Also:
- Constant Field Values
-
HIRAGANA
public static final int HIRAGANA
- See Also:
- Constant Field Values
-
KATAKANA
public static final int KATAKANA
- See Also:
- Constant Field Values
-
HANGUL
public static final int HANGUL
- See Also:
- Constant Field Values
-
URL
public static final int URL
- See Also:
- Constant Field Values
-
EMAIL
public static final int EMAIL
- See Also:
- Constant Field Values
-
TOKEN_TYPES
public static final java.lang.String[] TOKEN_TYPES
String token types that correspond to token type int constants
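Assuming the usual Lucene convention that the int constants above index directly into TOKEN_TYPES, a constant can be mapped to the type string that a TypeAttribute reports; a small illustrative fragment:

    String urlType   = UAX29URLEmailTokenizer.TOKEN_TYPES[UAX29URLEmailTokenizer.URL];    // "<URL>"
    String emailType = UAX29URLEmailTokenizer.TOKEN_TYPES[UAX29URLEmailTokenizer.EMAIL];  // "<EMAIL>"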
-
-
Constructor Detail
-
UAX29URLEmailTokenizer
public UAX29URLEmailTokenizer(Version matchVersion, java.io.Reader input)
Creates a new instance of the UAX29URLEmailTokenizer. Attaches the input to the newly created JFlex scanner.
- Parameters:
input - The input reader
-
UAX29URLEmailTokenizer
public UAX29URLEmailTokenizer(Version matchVersion, AttributeSource.AttributeFactory factory, java.io.Reader input)
Creates a new UAX29URLEmailTokenizer with a given AttributeSource.AttributeFactory.
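A sketch of calling the three-argument constructor, here simply passing the built-in default factory (the Version constant and input text are assumptions):

    UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer(
        Version.LUCENE_40,                                            // assumed constant
        AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY,   // the default factory
        new StringReader("see http://lucene.apache.org"));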
-
-
Method Detail
-
setMaxTokenLength
public void setMaxTokenLength(int length)
Set the max allowed token length. Any token longer than this is skipped.
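For example (the Version constant and the text variable are assumptions):

    UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer(
        Version.LUCENE_40, new StringReader(text));
    tokenizer.setMaxTokenLength(64);   // per the description above, tokens longer than 64 chars are skipped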
-
getMaxTokenLength
public int getMaxTokenLength()
- See Also:
setMaxTokenLength(int)
-
incrementToken
public final boolean incrementToken() throws java.io.IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.
The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.
To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
- Specified by:
incrementToken in class TokenStream
- Returns:
- false for end of stream; true otherwise
- Throws:
java.io.IOException
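A sketch of the consumption pattern just described: attribute references are obtained once at setup, and AttributeSource.captureState() keeps a copy of a token whose attributes must outlive the next incrementToken() call. The Version constant, input text, and class name are assumptions:

    import java.io.StringReader;

    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
    import org.apache.lucene.util.AttributeSource;
    import org.apache.lucene.util.Version;

    public class CaptureStateDemo {
      public static void main(String[] args) throws Exception {
        TokenStream stream = new UAX29URLEmailTokenizer(
            Version.LUCENE_40,   // assumed constant
            new StringReader("Mail admin@example.com about http://example.com/download"));
        // References retrieved once, not inside the loop, as recommended above.
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        TypeAttribute type = stream.addAttribute(TypeAttribute.class);
        String urlType = UAX29URLEmailTokenizer.TOKEN_TYPES[UAX29URLEmailTokenizer.URL];

        AttributeSource.State lastUrl = null;
        stream.reset();
        while (stream.incrementToken()) {
          // Attributes are overwritten on every call, so copy the state of a token to keep.
          if (urlType.equals(type.type())) {
            lastUrl = stream.captureState();
          }
        }
        if (lastUrl != null) {
          stream.restoreState(lastUrl);   // the attributes now reflect the saved token again
          System.out.println("last URL token: " + term);
        }
        stream.end();
        stream.close();
      }
    }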
-
end
public final void end() throws java.io.IOException
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature. This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. in case one or more whitespace characters followed the last token and a WhitespaceTokenizer was used.
Additionally, any skipped positions (such as those removed by a stopfilter) can be applied to the position increment, as can any adjustment of other attributes where the end-of-stream value may be important.
If you override this method, always call super.end().
- Overrides:
end in class TokenStream
- Throws:
java.io.IOException - If an I/O error occurs
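A hypothetical filter illustrating the contract above: it drops single-character tokens and, in end(), calls super.end() first and then folds the trailing skipped positions into the final position increment. The class name and the filtering rule are assumptions, not part of this API:

    import java.io.IOException;

    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

    public final class DropShortTokensFilter extends TokenFilter {
      private final CharTermAttribute term = addAttribute(CharTermAttribute.class);
      private final PositionIncrementAttribute posIncr = addAttribute(PositionIncrementAttribute.class);
      private int skippedPositions;

      public DropShortTokensFilter(TokenStream in) {
        super(in);
      }

      @Override
      public boolean incrementToken() throws IOException {
        skippedPositions = 0;
        while (input.incrementToken()) {
          if (term.length() > 1) {
            // Fold positions of the tokens skipped since the last kept token.
            posIncr.setPositionIncrement(posIncr.getPositionIncrement() + skippedPositions);
            return true;
          }
          skippedPositions += posIncr.getPositionIncrement();
        }
        return false;
      }

      @Override
      public void end() throws IOException {
        super.end();  // always call super.end() first, per the contract above
        posIncr.setPositionIncrement(posIncr.getPositionIncrement() + skippedPositions);
      }
    }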
-
close
public void close() throws java.io.IOException
Description copied from class: Tokenizer
Releases resources associated with this stream.
If you override this method, always call super.close(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on reuse).
NOTE: The default implementation closes the input Reader, so be sure to call super.close() when overriding this method.
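Since the tokenizer implements Closeable/AutoCloseable (see the interface list above), a consumer can also let try-with-resources invoke close(); a brief sketch, with the Version constant and text variable as assumptions:

    try (UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer(
            Version.LUCENE_40, new StringReader(text))) {
      CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
      tokenizer.reset();
      while (tokenizer.incrementToken()) {
        System.out.println(term);
      }
      tokenizer.end();
    }   // close() runs automatically here, releasing the input Reader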
-
reset
public void reset() throws java.io.IOException
Description copied from class: TokenStream
This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.
If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).
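From the consumer side, reset() is what makes a single tokenizer instance reusable across inputs. A sketch of that cycle, assuming a Lucene release that provides Tokenizer.setReader(Reader) for supplying the next input (older releases used a different reuse API); firstText and secondText are placeholder variables:

    UAX29URLEmailTokenizer tokenizer = new UAX29URLEmailTokenizer(
        Version.LUCENE_40, new StringReader(firstText));      // assumed Version constant
    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);

    tokenizer.reset();                                         // clean state before the first pass
    while (tokenizer.incrementToken()) { System.out.println(term); }
    tokenizer.end();
    tokenizer.close();

    tokenizer.setReader(new StringReader(secondText));         // assumed reuse API for new input
    tokenizer.reset();                                         // reset again, as if freshly created
    while (tokenizer.incrementToken()) { System.out.println(term); }
    tokenizer.end();
    tokenizer.close();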
-
-