Uses of Class
org.apache.lucene.analysis.TokenStream
Packages that use TokenStream

  org.apache.lucene.analysis
      Text analysis.
  org.apache.lucene.analysis.standard
      Fast, general-purpose grammar-based tokenizer. StandardTokenizer implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
  org.apache.lucene.document
      The logical representation of a Document for indexing and searching.
  org.apache.lucene.index
      Code to maintain and access indices.
  org.apache.lucene.util
      Some utility classes.
  org.apache.lucene.util.graph
      Utility classes for working with token streams as graphs.
Uses of TokenStream in org.apache.lucene.analysis

Subclasses of TokenStream in org.apache.lucene.analysis

  class CachingTokenFilter
      This class can be used if the token attributes of a TokenStream are intended to be consumed more than once.
  class FilteringTokenFilter
      Abstract base class for TokenFilters that may remove tokens.
  class GraphTokenFilter
      An abstract TokenFilter that exposes its input stream as a graph.
  class LowerCaseFilter
      Normalizes token text to lower case.
  class StopFilter
      Removes stop words from a token stream.
  class TokenFilter
      A TokenFilter is a TokenStream whose input is another TokenStream.
  class Tokenizer
      A Tokenizer is a TokenStream whose input is a Reader.

Fields in org.apache.lucene.analysis declared as TokenStream

  protected TokenStream TokenFilter.input
      The source of tokens for this filter.
  protected TokenStream Analyzer.TokenStreamComponents.sink
      Sink tokenstream, such as the outer tokenfilter decorating the chain.

Methods in org.apache.lucene.analysis that return TokenStream

  abstract TokenStream TokenFilterFactory.create(TokenStream input)
      Transform the specified input TokenStream.
  TokenStream Analyzer.TokenStreamComponents.getTokenStream()
      Returns the sink TokenStream.
  protected TokenStream Analyzer.normalize(String fieldName, TokenStream in)
      Wrap the given TokenStream in order to apply normalization filters.
  protected TokenStream AnalyzerWrapper.normalize(String fieldName, TokenStream in)
  TokenStream TokenFilterFactory.normalize(TokenStream input)
      Normalize the specified input TokenStream. While the default implementation returns the input unchanged, filters that should be applied at normalization time can delegate to the create method.
  TokenStream Analyzer.tokenStream(String fieldName, Reader reader)
      Returns a TokenStream suitable for fieldName, tokenizing the contents of reader.
  TokenStream Analyzer.tokenStream(String fieldName, String text)
      Returns a TokenStream suitable for fieldName, tokenizing the contents of text.
  static TokenStream AutomatonToTokenStream.toTokenStream(Automaton automaton)
      Converts an automaton into a TokenStream.
  TokenStream TokenFilter.unwrap()
  protected TokenStream AnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)
      Wraps / alters the given TokenStream for normalization purposes, taken from the wrapped Analyzer, to form new components.
  protected TokenStream DelegatingAnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis with parameters of type TokenStream

  abstract TokenStream TokenFilterFactory.create(TokenStream input)
      Transform the specified input TokenStream.
  protected TokenStream Analyzer.normalize(String fieldName, TokenStream in)
      Wrap the given TokenStream in order to apply normalization filters.
  protected TokenStream AnalyzerWrapper.normalize(String fieldName, TokenStream in)
  TokenStream TokenFilterFactory.normalize(TokenStream input)
      Normalize the specified input TokenStream. While the default implementation returns the input unchanged, filters that should be applied at normalization time can delegate to the create method.
  Automaton TokenStreamToAutomaton.toAutomaton(TokenStream in)
      Pulls the graph (including PositionLengthAttribute) from the provided TokenStream, and creates the corresponding automaton where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term.
  protected TokenStream AnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)
      Wraps / alters the given TokenStream for normalization purposes, taken from the wrapped Analyzer, to form new components.
  protected TokenStream DelegatingAnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis with parameters of type TokenStream

  CachingTokenFilter(TokenStream input)
      Create a new CachingTokenFilter around input.
  FilteringTokenFilter(TokenStream in)
      Create a new FilteringTokenFilter.
  GraphTokenFilter(TokenStream input)
      Create a new GraphTokenFilter.
  LowerCaseFilter(TokenStream in)
      Create a new LowerCaseFilter that normalizes token text to lower case.
  StopFilter(TokenStream in, CharArraySet stopWords)
      Constructs a filter which removes words from the input TokenStream that are named in the Set.
  TokenFilter(TokenStream input)
      Construct a token stream filtering the given input.
  TokenStreamComponents(Consumer<Reader> source, TokenStream result)
      Creates a new Analyzer.TokenStreamComponents instance.
  TokenStreamComponents(Tokenizer tokenizer, TokenStream result)
      Creates a new Analyzer.TokenStreamComponents instance.
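
Taken together, these classes form Lucene's analysis chain: a Tokenizer produces the initial TokenStream from a Reader, TokenFilters wrap and transform it, and Analyzer.TokenStreamComponents ties the source and sink together. The following is a minimal sketch of that pattern, assuming an illustrative field name ("body"), sample text, and an arbitrary stop-word set chosen for this example; it wires StandardTokenizer, LowerCaseFilter, and StopFilter into a custom Analyzer and consumes the result via Analyzer.tokenStream(String, String):

    import java.io.IOException;
    import java.util.Arrays;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.CharArraySet;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.StopFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class AnalysisChainSketch {

      // A custom Analyzer wiring a Tokenizer into TokenFilters via TokenStreamComponents.
      static Analyzer newAnalyzer() {
        // Illustrative stop-word set; ignoreCase = true.
        final CharArraySet stopWords = new CharArraySet(Arrays.asList("the", "a", "an"), true);
        return new Analyzer() {
          @Override
          protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new StandardTokenizer();        // TokenStream whose input is a Reader
            TokenStream result = new LowerCaseFilter(source);  // normalize token text to lower case
            result = new StopFilter(result, stopWords);        // remove stop words
            return new TokenStreamComponents(source, result);
          }
        };
      }

      public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = newAnalyzer();
             TokenStream ts = analyzer.tokenStream("body", "The Quick Brown Fox")) {
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          ts.reset();                              // mandatory before incrementToken()
          while (ts.incrementToken()) {
            System.out.println(term.toString());   // prints: quick, brown, fox
          }
          ts.end();                                // finalize end-of-stream attributes
        }
      }
    }

The reset(), incrementToken(), end(), close() sequence is the standard consumption contract for any TokenStream; Tokenizer-based chains typically fail with a "TokenStream contract violation" IllegalStateException if reset() is skipped.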
Uses of TokenStream in org.apache.lucene.analysis.standard

Subclasses of TokenStream in org.apache.lucene.analysis.standard

  class StandardTokenizer
      A grammar-based tokenizer constructed with JFlex.

Methods in org.apache.lucene.analysis.standard that return TokenStream

  protected TokenStream StandardAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.standard with parameters of type TokenStream

  protected TokenStream StandardAnalyzer.normalize(String fieldName, TokenStream in)
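
A minimal sketch of driving StandardTokenizer directly, assuming an arbitrary sample string; as a Tokenizer, it receives its input through setReader(Reader) rather than a constructor argument:

    import java.io.IOException;
    import java.io.StringReader;

    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

    public class StandardTokenizerSketch {
      public static void main(String[] args) throws IOException {
        try (StandardTokenizer tokenizer = new StandardTokenizer()) {
          tokenizer.setReader(new StringReader("The quick brown fox jumped over the lazy dog"));
          CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
          OffsetAttribute offsets = tokenizer.addAttribute(OffsetAttribute.class);
          tokenizer.reset();
          while (tokenizer.incrementToken()) {
            // Print each term with its character offsets into the original text.
            System.out.println(term + " [" + offsets.startOffset() + "," + offsets.endOffset() + ")");
          }
          tokenizer.end();
        }
      }
    }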
Uses of TokenStream in org.apache.lucene.document

Fields in org.apache.lucene.document declared as TokenStream

  protected TokenStream Field.tokenStream
      Pre-analyzed tokenStream for indexed fields; this is separate from fieldsData because you are allowed to have both, e.g. the field may have a String value but you customize how it's tokenized.

Methods in org.apache.lucene.document that return TokenStream

  TokenStream FeatureField.tokenStream(Analyzer analyzer, TokenStream reuse)
  TokenStream Field.tokenStream(Analyzer analyzer, TokenStream reuse)
  TokenStream ShapeDocValuesField.tokenStream(Analyzer analyzer, TokenStream reuse)
      TokenStreams are not yet supported.
  TokenStream Field.tokenStreamValue()
      The TokenStream for this field to be used when indexing, or null.

Methods in org.apache.lucene.document with parameters of type TokenStream

  void Field.setTokenStream(TokenStream tokenStream)
      Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true.
  TokenStream FeatureField.tokenStream(Analyzer analyzer, TokenStream reuse)
  TokenStream Field.tokenStream(Analyzer analyzer, TokenStream reuse)
  TokenStream ShapeDocValuesField.tokenStream(Analyzer analyzer, TokenStream reuse)
      TokenStreams are not yet supported.

Constructors in org.apache.lucene.document with parameters of type TokenStream

  Field(String name, TokenStream tokenStream, IndexableFieldType type)
      Create field with TokenStream value.
  TextField(String name, TokenStream stream)
      Creates a new un-stored TextField with TokenStream value.
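
A minimal sketch of the pre-analyzed field case described above, assuming an illustrative field name and text: instead of letting the analyzer run at index time, a ready-made TokenStream is handed to the TextField(String, TokenStream) constructor (the Field(String, TokenStream, IndexableFieldType) constructor plays the same role for custom field types):

    import java.io.IOException;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.TextField;

    public class PreAnalyzedFieldSketch {
      public static void main(String[] args) throws IOException {
        Analyzer analyzer = new StandardAnalyzer();

        // Build the token stream up front instead of letting IndexWriter run the analyzer.
        TokenStream stream = analyzer.tokenStream("content", "pre-analyzed field value");

        Document doc = new Document();
        // Un-stored TextField whose indexed tokens come from the supplied stream;
        // the stream is consumed when the document is indexed.
        doc.add(new TextField("content", stream));

        // doc would now be passed to IndexWriter.addDocument(doc); omitted here.
      }
    }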
Uses of TokenStream in org.apache.lucene.index

Methods in org.apache.lucene.index that return TokenStream

  TokenStream IndexableField.tokenStream(Analyzer analyzer, TokenStream reuse)
      Creates the TokenStream used for indexing this field.

Methods in org.apache.lucene.index with parameters of type TokenStream

  TokenStream IndexableField.tokenStream(Analyzer analyzer, TokenStream reuse)
      Creates the TokenStream used for indexing this field.
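
IndexableField.tokenStream(Analyzer, TokenStream) is the hook the indexing chain calls for every indexed field; passing null as the reuse argument simply requests a fresh stream. A minimal sketch, assuming an illustrative TextField, that calls the method directly to inspect the tokens that would be indexed:

    import java.io.IOException;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexableField;

    public class IndexableFieldTokenStreamSketch {
      public static void main(String[] args) throws IOException {
        Analyzer analyzer = new StandardAnalyzer();
        IndexableField field = new TextField("title", "Uses of TokenStream", Field.Store.NO);

        // null for 'reuse' creates a fresh stream; the indexing chain normally passes
        // back the previous stream so the field implementation can reuse it.
        try (TokenStream ts = field.tokenStream(analyzer, null)) {
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          ts.reset();
          while (ts.incrementToken()) {
            System.out.println(term);
          }
          ts.end();
        }
      }
    }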
Uses of TokenStream in org.apache.lucene.util

Methods in org.apache.lucene.util with parameters of type TokenStream

  protected Query QueryBuilder.analyzeBoolean(String field, TokenStream stream)
      Creates simple boolean query from the cached tokenstream contents.
  protected Query QueryBuilder.analyzeGraphBoolean(String field, TokenStream source, BooleanClause.Occur operator)
      Creates a boolean query from a graph token stream.
  protected Query QueryBuilder.analyzeGraphPhrase(TokenStream source, String field, int phraseSlop)
      Creates graph phrase query from the tokenstream contents.
  protected Query QueryBuilder.analyzeMultiBoolean(String field, TokenStream stream, BooleanClause.Occur operator)
      Creates complex boolean query from the cached tokenstream contents.
  protected Query QueryBuilder.analyzeMultiPhrase(String field, TokenStream stream, int slop)
      Creates complex phrase query from the cached tokenstream contents.
  protected Query QueryBuilder.analyzePhrase(String field, TokenStream stream, int slop)
      Creates simple phrase query from the cached tokenstream contents.
  protected Query QueryBuilder.analyzeTerm(String field, TokenStream stream)
      Creates simple term query from the cached tokenstream contents.
  protected Query QueryBuilder.createFieldQuery(TokenStream source, BooleanClause.Occur operator, String field, boolean quoted, int phraseSlop)
      Creates a query from a token stream.
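
These protected analyze* and createFieldQuery hooks are not usually called directly; they are driven by QueryBuilder's public entry points such as createBooleanQuery and createPhraseQuery, which first analyze the query text into a TokenStream. A minimal sketch, assuming an illustrative field name and query text:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.QueryBuilder;

    public class QueryBuilderSketch {
      public static void main(String[] args) {
        QueryBuilder builder = new QueryBuilder(new StandardAnalyzer());

        // Internally these run the analyzer and hand the resulting TokenStream
        // to the protected analyze*/createFieldQuery hooks listed above.
        Query bool = builder.createBooleanQuery("body", "quick brown fox", BooleanClause.Occur.MUST);
        Query phrase = builder.createPhraseQuery("body", "quick brown fox", 2);

        System.out.println(bool);    // e.g. +body:quick +body:brown +body:fox
        System.out.println(phrase);  // e.g. body:"quick brown fox"~2
      }
    }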
Uses of TokenStream in org.apache.lucene.util.graph

Methods in org.apache.lucene.util.graph that return types with arguments of type TokenStream

  Iterator<TokenStream> GraphTokenStreamFiniteStrings.getFiniteStrings()
      Get all finite strings from the automaton.
  Iterator<TokenStream> GraphTokenStreamFiniteStrings.getFiniteStrings(int startState, int endState)
      Get all finite strings that start at startState and end at endState.

Constructors in org.apache.lucene.util.graph with parameters of type TokenStream

  GraphTokenStreamFiniteStrings(TokenStream in)
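
GraphTokenStreamFiniteStrings expands a token graph into its individual paths, each returned as its own TokenStream. A minimal sketch, assuming an illustrative analyzer and sample text; a plain analysis chain yields a single path, while graph-producing filters (for example multi-token synonym filters) can yield several:

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.graph.GraphTokenStreamFiniteStrings;

    public class GraphPathsSketch {

      // Prints every token path through the (possibly branching) token graph.
      static void printPaths(TokenStream in) throws IOException {
        GraphTokenStreamFiniteStrings graph = new GraphTokenStreamFiniteStrings(in);
        Iterator<TokenStream> paths = graph.getFiniteStrings();
        while (paths.hasNext()) {
          TokenStream path = paths.next();
          CharTermAttribute term = path.addAttribute(CharTermAttribute.class);
          StringBuilder sb = new StringBuilder();
          path.reset();
          while (path.incrementToken()) {
            sb.append(term).append(' ');
          }
          path.end();
          System.out.println(sb.toString().trim());
        }
      }

      public static void main(String[] args) throws IOException {
        Analyzer analyzer = new StandardAnalyzer();
        try (TokenStream ts = analyzer.tokenStream("body", "new york city")) {
          printPaths(ts);   // single path here: "new york city"
        }
      }
    }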