Method from org.apache.lucene.analysis.Analyzer Detail: |
public void close() {
  tokenStreams.close();
  tokenStreams = null;
}
Frees persistent resources used by this Analyzer |
public int getOffsetGap(Fieldable field) {
  if (field.isTokenized())
    return 1;
  else
    return 0;
}
Just like #getPositionIncrementGap, except for
Token offsets instead. By default this returns 1 for
tokenized fields, as if the fields were joined
with an extra space character, and 0 for un-tokenized
fields. This method is only called if the field
produced at least one token for indexing. |
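The effect of the offset gap can be sketched without Lucene: when several values are indexed under one field name, each value's character offsets are shifted by the accumulated text length plus the gap, so a gap of 1 behaves as if the values were joined with a single space. The class and method names below are hypothetical, for illustration only.

```java
// Sketch (not Lucene code): how an offset gap of 1 shifts the start
// offsets of successive values indexed under the same field name.
public class OffsetGapDemo {
    // Returns the start offset of each value, given the gap the
    // analyzer reports between tokenized values (1 by default).
    public static int[] startOffsets(String[] values, int offsetGap) {
        int[] starts = new int[values.length];
        int current = 0;
        for (int i = 0; i < values.length; i++) {
            starts[i] = current;
            current += values[i].length() + offsetGap;
        }
        return starts;
    }

    public static void main(String[] args) {
        int[] starts = startOffsets(new String[] {"red", "blue"}, 1);
        // "red" starts at 0; "blue" starts at 3 + 1 = 4,
        // exactly where it would sit in the joined string "red blue".
        System.out.println(starts[0] + " " + starts[1]);
    }
}
```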
public int getPositionIncrementGap(String fieldName) {
  return 0;
}
Invoked before indexing a Fieldable instance if
terms have already been added to that field. This allows custom
analyzers to place an automatic position increment gap between
Fieldable instances using the same field name. The default
position increment gap is 0. With a 0 position increment gap and
the typical default token position increment of 1, all terms in a field,
including across Fieldable instances, are in successive positions, allowing
exact PhraseQuery matches, for instance, across Fieldable instance boundaries. |
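The position arithmetic described above can be sketched in plain Java: with the default gap of 0 and token increments of 1, tokens from successive field values form one unbroken run of positions, so a phrase can match across the value boundary; a large gap (added to the increment of the first token of each later value) prevents that. The class below is a hypothetical stand-in, not Lucene code.

```java
// Sketch (not Lucene code): term positions across multiple values of
// the same field, with a configurable position increment gap.
public class PositionGapDemo {
    // Assigns a position to every token across all field values.
    public static int[] positions(String[][] fieldValues, int gap) {
        int count = 0;
        for (String[] v : fieldValues) count += v.length;
        int[] out = new int[count];
        int pos = -1, i = 0;
        for (int v = 0; v < fieldValues.length; v++) {
            for (int t = 0; t < fieldValues[v].length; t++) {
                // the first token of a later value carries the extra gap
                int increment = (v > 0 && t == 0) ? 1 + gap : 1;
                pos += increment;
                out[i++] = pos;
            }
        }
        return out;
    }
}
```

With values {"big red", "fast car"} and gap 0, the positions are 0,1,2,3, so the phrase "red fast" matches across the boundary; with gap 100 they are 0,1,102,103, and it cannot.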
protected Object getPreviousTokenStream() {
  try {
    return tokenStreams.get();
  } catch (NullPointerException npe) {
    if (tokenStreams == null) {
      throw new AlreadyClosedException("this Analyzer is closed");
    } else {
      throw npe;
    }
  }
}
Used by Analyzers that implement reusableTokenStream
to retrieve previously saved TokenStreams for re-use
by the same thread. |
public TokenStream reusableTokenStream(String fieldName,
                                       Reader reader) throws IOException {
  return tokenStream(fieldName, reader);
}
Creates a TokenStream that is allowed to be re-used
from the previous time that the same thread called
this method. Callers that do not need to use more
than one TokenStream at the same time from this
analyzer should use this method for better
performance. |
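The per-thread caching that getPreviousTokenStream and setPreviousTokenStream enable can be sketched with a plain ThreadLocal: the first call on a thread creates and caches a stream, and later calls on the same thread reset and return that same instance instead of allocating a new one. The ReusableStream class and method names below are hypothetical, not Lucene's API.

```java
import java.io.Reader;

// Sketch (not Lucene code): per-thread reuse of a token stream via a
// ThreadLocal cache, mirroring the reusableTokenStream pattern.
public class ReuseDemo {
    // Hypothetical stand-in for a reusable TokenStream.
    public static class ReusableStream {
        Reader input;
        void reset(Reader input) { this.input = input; }
    }

    // One cached stream per thread, as the tokenStreams field provides.
    private static final ThreadLocal<ReusableStream> previous =
        new ThreadLocal<ReusableStream>();

    public static ReusableStream reusableStream(Reader reader) {
        ReusableStream s = previous.get();
        if (s == null) {
            s = new ReusableStream(); // first call on this thread
            previous.set(s);          // save for later re-use
        }
        s.reset(reader);              // re-point the cached stream
        return s;
    }
}
```

The saving is that repeated calls on one thread skip the allocation, which is why callers that only need one stream at a time are steered toward reusableTokenStream.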
protected void setOverridesTokenStreamMethod(Class<? extends Analyzer> baseClass) {
  try {
    Method m = this.getClass().getMethod("tokenStream", String.class, Reader.class);
    overridesTokenStreamMethod = m.getDeclaringClass() != baseClass;
  } catch (NoSuchMethodException nsme) {
    // cannot happen, as baseClass is a subclass of Analyzer through generics
    overridesTokenStreamMethod = false;
  }
}
Deprecated! This is only present to preserve
back-compat of classes that subclass a core analyzer
and override tokenStream but not reusableTokenStream |
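The reflection trick this method relies on can be shown in isolation: Class.getMethod resolves an inherited public method to the class that actually declares it, so comparing getDeclaringClass against the base class reveals whether a subclass overrides it. The demo classes below are hypothetical stand-ins for Analyzer and its subclasses.

```java
import java.lang.reflect.Method;

// Sketch (not Lucene code): detecting whether a subclass overrides a
// method, via getMethod().getDeclaringClass().
public class OverrideCheckDemo {
    public static class Base {
        public String tokenStream() { return "base"; }
    }
    public static class Overriding extends Base {
        @Override public String tokenStream() { return "sub"; }
    }
    public static class NonOverriding extends Base { }

    public static boolean overridesTokenStream(Class<? extends Base> cls) {
        try {
            Method m = cls.getMethod("tokenStream");
            // The declaring class differs from Base only when overridden.
            return m.getDeclaringClass() != Base.class;
        } catch (NoSuchMethodException e) {
            return false; // cannot happen: every subclass inherits it
        }
    }
}
```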
protected void setPreviousTokenStream(Object obj) {
  try {
    tokenStreams.set(obj);
  } catch (NullPointerException npe) {
    if (tokenStreams == null) {
      throw new AlreadyClosedException("this Analyzer is closed");
    } else {
      throw npe;
    }
  }
}
Used by Analyzers that implement reusableTokenStream
to save a TokenStream for later re-use by the same
thread. |
abstract public TokenStream tokenStream(String fieldName,
                                        Reader reader);
Creates a TokenStream which tokenizes all the text in the provided
Reader. Must be able to handle null field name for
backward compatibility. |
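The kind of work a tokenStream implementation performs can be sketched without Lucene: read all the text from the Reader and break it into tokens. A real implementation returns a TokenStream that produces tokens incrementally with positions and offsets; this simplified, hypothetical stand-in just collects whitespace-separated tokens eagerly.

```java
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

// Sketch (not Lucene code): eager whitespace tokenization of all the
// text in a Reader, illustrating what tokenStream must do lazily.
public class WhitespaceTokenizeDemo {
    public static List<String> tokenize(Reader reader) throws IOException {
        List<String> tokens = new ArrayList<String>();
        StringBuilder current = new StringBuilder();
        int c;
        while ((c = reader.read()) != -1) {
            if (Character.isWhitespace((char) c)) {
                if (current.length() > 0) {       // end of a token
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else {
                current.append((char) c);
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }
}
```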