- All Implemented Interfaces:
public final class LimitTokenCountFilter extends TokenFilter

This TokenFilter limits the number of tokens while indexing. It is a replacement for the maximum field length setting inside IndexWriter.

By default, this filter ignores any tokens in the wrapped TokenStream once the limit has been reached, which can result in reset() being called prior to incrementToken() returning false. For most TokenStream implementations this should be acceptable, and faster than consuming the full stream. If you are wrapping a TokenStream which requires that the full stream of tokens be exhausted in order to function properly, use the LimitTokenCountFilter(TokenStream, int, boolean) constructor with consumeAllTokens set to true.
Constructors

LimitTokenCountFilter(TokenStream in, int maxTokenCount)
    Build a filter that only accepts tokens up to a maximum number.
LimitTokenCountFilter(TokenStream in, int maxTokenCount, boolean consumeAllTokens)
    Build a filter that limits the maximum number of tokens per field.
Methods

boolean incrementToken()
    Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
void reset()
    This method is called by a consumer before it begins consumption using incrementToken().
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
public LimitTokenCountFilter(TokenStream in, int maxTokenCount)

Build a filter that only accepts tokens up to a maximum number. This filter will not consume any tokens beyond the maxTokenCount limit.
public LimitTokenCountFilter(TokenStream in, int maxTokenCount, boolean consumeAllTokens)

Build a filter that limits the maximum number of tokens per field.

Parameters:
in - the stream to wrap
maxTokenCount - max number of tokens to produce
consumeAllTokens - whether all tokens from the input must be consumed even if maxTokenCount is reached
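As a rough sketch of how the filter is constructed and consumed, the following limits a whitespace-tokenized stream to two tokens. It assumes Lucene's analysis-common module is on the classpath; the class name, input text, and limit are illustrative, not from the original page.

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.LimitTokenCountFilter;

public class LimitTokenCountExample {
    // Count how many tokens the limited stream actually produces.
    public static int countTokens(String text, int maxTokenCount) throws IOException {
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader(text));
        // The two-argument constructor does not consume tokens past the limit
        // (consumeAllTokens is false).
        try (TokenStream stream = new LimitTokenCountFilter(tokenizer, maxTokenCount)) {
            stream.reset();
            int count = 0;
            while (stream.incrementToken()) {
                count++;
            }
            stream.end();
            return count;
        }
    }

    public static void main(String[] args) throws IOException {
        // Five whitespace-separated tokens, limited to 2
        System.out.println(countTokens("one two three four five", 2));
    }
}
```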
public boolean incrementToken() throws java.io.IOException

Description copied from class: TokenStream

Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has been returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in incrementToken().
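The consumer contract above can be sketched as follows: obtain attribute references once before the loop, call reset() before the first incrementToken(), and copy attribute values out because the same attribute object is reused for every token. The class name and helper are illustrative assumptions.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ConsumerExample {
    // Drain a TokenStream and return its terms as strings.
    public static List<String> collectTerms(TokenStream stream) throws IOException {
        // Retrieve the attribute reference before the loop, not per token.
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        List<String> terms = new ArrayList<>();
        stream.reset();                   // required before the first incrementToken()
        while (stream.incrementToken()) {
            // Copy the value: the attribute instance is reused for each token.
            terms.add(term.toString());
        }
        stream.end();                     // sets end-of-stream attributes
        stream.close();
        return terms;
    }
}
```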
public void reset() throws java.io.IOException

Description copied from class: TokenFilter

This method is called by a consumer before it begins consumption using incrementToken().

Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., the stream may throw an IllegalStateException on further usage).

NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.
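A minimal sketch of the override pattern described above: a stateful TokenFilter that clears its own state in reset() and chains to the wrapped stream via super.reset(). The class and its counter field are hypothetical, added only to illustrate the contract.

```java
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public final class CountingFilter extends TokenFilter {
    private int seen;   // internal state that must be cleared on reuse

    public CountingFilter(TokenStream in) {
        super(in);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (input.incrementToken()) {
            seen++;
            return true;
        }
        return false;
    }

    @Override
    public void reset() throws IOException {
        super.reset();  // never skip this: it resets the wrapped input stream
        seen = 0;       // then clear this filter's own state
    }

    public int tokensSeen() {
        return seen;
    }
}
```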