Package org.apache.lucene.analysis.ngram
Class EdgeNGramTokenizer
java.lang.Object
  org.apache.lucene.util.AttributeSource
    org.apache.lucene.analysis.TokenStream
      org.apache.lucene.analysis.Tokenizer
        org.apache.lucene.analysis.ngram.NGramTokenizer
          org.apache.lucene.analysis.ngram.EdgeNGramTokenizer

All Implemented Interfaces:
java.io.Closeable, java.lang.AutoCloseable
public class EdgeNGramTokenizer extends NGramTokenizer
Tokenizes the input from an edge into n-grams of given size(s). This Tokenizer creates n-grams from the beginning edge or ending edge of an input token.

As of Lucene 4.4, this tokenizer:
- can handle maxGram larger than 1024 chars, but beware that this will result in increased memory usage
- doesn't trim the input
- sets position increments equal to 1 instead of 1 for the first token and 0 for all other ones
- doesn't support backward n-grams anymore
- supports pre-tokenization
- correctly handles supplementary characters

Although highly discouraged, it is still possible to use the old behavior through Lucene43EdgeNGramTokenizer.
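For illustration, a minimal sketch of consuming this tokenizer directly; the Version constant (Version.LUCENE_44), the sample input "lucene", and the gram sizes 1 to 3 are assumptions made for the example, not values prescribed by this class:

import java.io.StringReader;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class EdgeNGramDemo {
  public static void main(String[] args) throws Exception {
    // Emit edge n-grams of length 1 to 3 from the beginning of the input.
    EdgeNGramTokenizer tokenizer =
        new EdgeNGramTokenizer(Version.LUCENE_44, new StringReader("lucene"), 1, 3);
    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      System.out.println(term.toString()); // expected: "l", "lu", "luc"
    }
    tokenizer.end();
    tokenizer.close();
  }
}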
Nested Class Summary
-
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
Field Summary
Fields
Modifier and Type    Field
static int           DEFAULT_MAX_GRAM_SIZE
static int           DEFAULT_MIN_GRAM_SIZE
Fields inherited from class org.apache.lucene.analysis.ngram.NGramTokenizer
DEFAULT_MAX_NGRAM_SIZE, DEFAULT_MIN_NGRAM_SIZE
Constructor Summary
Constructors
EdgeNGramTokenizer(Version version, java.io.Reader input, int minGram, int maxGram)
  Creates EdgeNGramTokenizer that can generate n-grams in the sizes of the given range
EdgeNGramTokenizer(Version version, AttributeSource.AttributeFactory factory, java.io.Reader input, int minGram, int maxGram)
  Creates EdgeNGramTokenizer that can generate n-grams in the sizes of the given range
Method Summary
-
Methods inherited from class org.apache.lucene.analysis.ngram.NGramTokenizer
end, incrementToken, reset
-
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
Field Detail
-
DEFAULT_MAX_GRAM_SIZE
public static final int DEFAULT_MAX_GRAM_SIZE
- See Also:
- Constant Field Values
-
DEFAULT_MIN_GRAM_SIZE
public static final int DEFAULT_MIN_GRAM_SIZE
- See Also:
- Constant Field Values
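
As a sketch, these constants can be passed straight to a constructor when the caller has no specific gram sizes in mind; the Version constant and the sample input are assumptions made for the example:

import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
import org.apache.lucene.util.Version;

// Construct a tokenizer using the class's default gram sizes.
Reader reader = new StringReader("sample");
EdgeNGramTokenizer tokenizer = new EdgeNGramTokenizer(
    Version.LUCENE_44, reader,
    EdgeNGramTokenizer.DEFAULT_MIN_GRAM_SIZE,
    EdgeNGramTokenizer.DEFAULT_MAX_GRAM_SIZE);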
Constructor Detail
-
EdgeNGramTokenizer
public EdgeNGramTokenizer(Version version, java.io.Reader input, int minGram, int maxGram)
Creates EdgeNGramTokenizer that can generate n-grams in the sizes of the given range.
Parameters:
version - the Lucene match version
input - Reader holding the input to be tokenized
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate
-
EdgeNGramTokenizer
public EdgeNGramTokenizer(Version version, AttributeSource.AttributeFactory factory, java.io.Reader input, int minGram, int maxGram)
Creates EdgeNGramTokenizer that can generate n-grams in the sizes of the given range.
Parameters:
version - the Lucene match version
factory - AttributeSource.AttributeFactory to use
input - Reader holding the input to be tokenized
minGram - the smallest n-gram to generate
maxGram - the largest n-gram to generate
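
A sketch of this factory-taking constructor using the default attribute factory; the Version constant, the sample input, and the gram sizes 2 to 4 are arbitrary example values:

import java.io.StringReader;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
import org.apache.lucene.util.AttributeSource;
import org.apache.lucene.util.Version;

// Pass an explicit AttributeFactory; here the default factory is used.
EdgeNGramTokenizer tokenizer = new EdgeNGramTokenizer(
    Version.LUCENE_44,
    AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY,
    new StringReader("lucene"),
    2, 4);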