lucene-3.0.1-src » org.apache.lucene.analysis
org.apache.lucene.analysis
abstract public class: TokenStream
java.lang.Object
   org.apache.lucene.util.AttributeSource
      org.apache.lucene.analysis.TokenStream

All Implemented Interfaces:
    Closeable

A TokenStream enumerates the sequence of tokens, either from Fields of a Document or from query text.

This is an abstract class; concrete subclasses are:

  * Tokenizer, a TokenStream whose input is a Reader; and
  * TokenFilter, a TokenStream whose input is another TokenStream.

A new TokenStream API has been introduced with Lucene 2.9. This API has moved from being Token-based to Attribute-based. While Token still exists in 2.9 as a convenience class, the preferred way to store the information of a Token is to use AttributeImpls.

TokenStream now extends AttributeSource, which provides access to all of the token Attributes for the TokenStream. Note that only one instance per AttributeImpl is created and reused for every token. This approach reduces object creation and allows local caching of references to the AttributeImpls. See #incrementToken() for further details.

The workflow of the new TokenStream API is as follows:

  1. Instantiation of TokenStream/TokenFilters which add/get attributes to/from the AttributeSource.
  2. The consumer calls TokenStream#reset().
  3. The consumer retrieves attributes from the stream and stores local references to all attributes it wants to access.
  4. The consumer calls #incrementToken() until it returns false, consuming the attributes after each call.
  5. The consumer calls #end() so that any end-of-stream operations can be performed.
  6. The consumer calls #close() to release any resources when finished using the TokenStream.
To make sure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in #incrementToken().
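The six workflow steps above can be sketched as a small consumer program. This is an illustrative sketch, not code from the Lucene distribution; it assumes lucene-core 3.0.x on the classpath and uses the (since-deprecated) WhitespaceTokenizer(Reader) constructor and TermAttribute class from that release.

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public class ConsumeTokens {
  public static void main(String[] args) throws IOException {
    // Step 1: instantiate the stream, which registers its attributes.
    TokenStream stream = new WhitespaceTokenizer(new StringReader("some text here"));

    // Step 3: retrieve the attributes once and keep local references;
    // the same AttributeImpl instances are reused for every token.
    TermAttribute term = stream.addAttribute(TermAttribute.class);
    OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class);

    stream.reset();                     // step 2
    while (stream.incrementToken()) {   // step 4: advance and consume
      System.out.println(term.term() + " [" + offset.startOffset()
          + "," + offset.endOffset() + "]");
    }
    stream.end();                       // step 5: end-of-stream operations
    stream.close();                     // step 6: release resources
  }
}
```

Note that the attribute references are fetched before the loop, not inside it, in keeping with the reuse model described above.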

You can find some example code for the new API in the analysis package level Javadoc.

Sometimes it is desirable to capture the current state of a TokenStream, e.g., for buffering purposes (see CachingTokenFilter, TeeSinkTokenFilter). For this use case AttributeSource#captureState and AttributeSource#restoreState can be used.
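As a minimal sketch of this buffering pattern, the hypothetical filter below (not part of Lucene) snapshots each token's attribute state with captureState() and, once the input is exhausted, replays the last token once via restoreState():

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.util.AttributeSource;

// Illustrative only: emits every token from the input, then the last
// token a second time, by capturing and restoring attribute state.
public final class RepeatLastTokenFilter extends TokenFilter {
  private AttributeSource.State saved;  // snapshot of the last token seen
  private boolean replayed = false;

  public RepeatLastTokenFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      saved = captureState();           // copy all attribute values
      return true;
    }
    if (saved != null && !replayed) {
      restoreState(saved);              // replay the buffered token
      replayed = true;
      return true;
    }
    return false;
  }
}
```

Because a TokenFilter shares its AttributeSource with its input, restoreState() here writes back into the very attribute instances the consumer already holds references to.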
Constructor:
 protected TokenStream() 
 protected TokenStream(AttributeSource input) 
    A TokenStream that uses the same attributes as the supplied one.
 protected TokenStream(AttributeFactory factory) 
Method from org.apache.lucene.analysis.TokenStream Summary:
close,   end,   incrementToken,   reset
Methods from org.apache.lucene.util.AttributeSource:
addAttribute,   addAttributeImpl,   captureState,   clearAttributes,   cloneAttributes,   equals,   getAttribute,   getAttributeClassesIterator,   getAttributeFactory,   getAttributeImplsIterator,   hasAttribute,   hasAttributes,   hashCode,   restoreState,   toString
Methods from java.lang.Object:
clone,   equals,   finalize,   getClass,   hashCode,   notify,   notifyAll,   toString,   wait,   wait,   wait
Method from org.apache.lucene.analysis.TokenStream Detail:
 public  void close() throws IOException 
    Releases resources associated with this stream.
 public  void end() throws IOException 
This method is called by the consumer after the last token has been consumed, i.e., after #incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g., when one or more whitespace characters followed the last token and a WhitespaceTokenizer was used.

 abstract public boolean incrementToken() throws IOException
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Subclasses must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use #captureState to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to #addAttribute(Class) and #getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in #incrementToken().
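The guidance above can be sketched as a small TokenFilter (an illustrative example, not a Lucene class): the attribute reference is obtained once in the constructor, and incrementToken() only mutates the reused AttributeImpl. It uses the 3.0-era TermAttribute API (termBuffer()/termLength()).

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

// Illustrative only: lowercases each token's term text in place.
public final class LowerCaseSketchFilter extends TokenFilter {
  // Added (or looked up) once, during instantiation -- never per token.
  private final TermAttribute termAtt;

  public LowerCaseSketchFilter(TokenStream input) {
    super(input);
    termAtt = addAttribute(TermAttribute.class);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;                  // end of stream
    }
    // Mutate the shared, reused attribute instance in place.
    char[] buffer = termAtt.termBuffer();
    int length = termAtt.termLength();
    for (int i = 0; i < length; i++) {
      buffer[i] = Character.toLowerCase(buffer[i]);
    }
    return true;
  }
}
```

Since the filter shares the AttributeSource of its input, addAttribute() in the constructor either registers the attribute or returns the instance the upstream stream already created.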

 public  void reset() throws IOException 
    Resets this stream to the beginning. This is an optional operation, so subclasses may or may not implement this method. #reset() is not needed for the standard indexing process. However, if the tokens of a TokenStream are intended to be consumed more than once, it is necessary to implement #reset() . Note that if your TokenStream caches tokens and feeds them back again after a reset, it is imperative that you clone the tokens when you store them away (on the first pass) as well as when you return them (on future passes after #reset() ).