All Classes
Class | Description
AbstractAnalysisFactory |
|
AbstractBlockPackedWriter |
|
AbstractDictionary |
SmartChineseAnalyzer abstract dictionary implementation.
|
AbstractEncoder |
Base class for payload encoders.
|
AbstractPagedMutable<T extends AbstractPagedMutable<T>> |
|
AbstractQueryConfig |
|
AbstractRangeQueryNode<T extends FieldValuePairQueryNode<?>> |
This class should be extended by nodes intending to represent range queries.
|
Accountable |
An object whose RAM usage can be computed.
|
Accountables |
Helper methods for constructing nested resource descriptions
and debugging RAM usage.
|
AfterEffect |
This class acts as the base class for the implementations of the first
normalization of the informative content in the DFR framework.
|
AfterEffectB |
Model of the information gain based on the ratio of two Bernoulli processes.
|
AfterEffectL |
Model of the information gain based on Laplace's law of succession.
|
AllGroupHeadsCollector<T> |
This collector specializes in collecting the most relevant document (group head) for each
group that matches the query.
|
AllGroupHeadsCollector.GroupHead<T> |
Represents a group head.
|
AllGroupHeadsCollector.ScoringGroupHead<T> |
|
AllGroupHeadsCollector.ScoringGroupHeadsCollector<T> |
Specialized implementation for sorting by score
|
AllGroupHeadsCollector.SortingGroupHead<T> |
|
AllGroupHeadsCollector.SortingGroupHeadsCollector<T> |
|
AllGroupsCollector<T> |
A collector that collects all groups that match the
query.
|
AllowLeadingWildcardProcessor |
|
AlreadyClosedException |
This exception is thrown when there is an attempt to
access something that has already been closed.
|
Among |
This is rev 502 of the Snowball SVN trunk,
now located at GitHub,
but modified:
made abstract and introduced the abstract method stem to avoid expensive reflection in the filter class.
|
AnalysisOffsetStrategy |
Provides a base class for analysis based offset strategies to extend from.
|
AnalysisOffsetStrategy.MultiValueTokenStream |
Wraps an Analyzer and string text that represents multiple values delimited by a specified character.
|
AnalysisSPILoader<S extends AbstractAnalysisFactory> |
Helper class for loading named SPIs from classpath (e.g.
|
Analyzer |
An Analyzer builds TokenStreams, which analyze text.
|
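A minimal usage sketch (not part of this listing): consuming the TokenStream produced by an Analyzer. StandardAnalyzer and the field name "body" are arbitrary choices for illustration.

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class AnalyzerSketch {
      public static void main(String[] args) throws IOException {
        try (Analyzer analyzer = new StandardAnalyzer();
             TokenStream ts = analyzer.tokenStream("body", "Lucene builds TokenStreams")) {
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          ts.reset();                       // must be called before incrementToken()
          while (ts.incrementToken()) {
            System.out.println(term.toString());
          }
          ts.end();                         // finalize offsets/state
        }
      }
    }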
Analyzer.ReuseStrategy |
|
Analyzer.StringTokenStream |
|
Analyzer.TokenStreamComponents |
This class encapsulates the outer components of a token stream.
|
AnalyzerProfile |
Manages analysis data configuration for SmartChineseAnalyzer
|
AnalyzerQueryNodeProcessor |
|
AnalyzerWrapper |
Extension to Analyzer suitable for Analyzers which wrap
other Analyzers.
|
AnalyzingInfixSuggester |
Analyzes the input text and then suggests matches based
on prefix matches to any tokens in the indexed text.
|
AnalyzingSuggester |
Suggester that first analyzes the surface form, adds the
analyzed form to a weighted FST, and then does the same
thing at lookup time.
|
AnalyzingSuggester.AnalyzingComparator |
|
AndQuery |
Factory for conjunctions
|
AndQueryNode |
An AndQueryNode represents an AND boolean operation performed on a
list of nodes.
|
AnyQueryNode |
An AnyQueryNode represents an ANY operator performed on a list of
nodes.
|
AnyQueryNodeBuilder |
Builds a BooleanQuery of SHOULD clauses, possibly with
some minimum number to match.
|
ApostropheFilter |
Strips all characters after an apostrophe (including the apostrophe itself).
|
ApostropheFilterFactory |
|
ArabicAnalyzer |
|
ArabicAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
ArabicNormalizationFilter |
|
ArabicNormalizationFilterFactory |
|
ArabicNormalizer |
Normalizer for Arabic.
|
ArabicStemFilter |
|
ArabicStemFilterFactory |
|
ArabicStemmer |
Stemmer for Arabic.
|
ArabicStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
ArmenianAnalyzer |
|
ArmenianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
ArmenianStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
ArrayInPlaceMergeSorter<T> |
|
ArrayIntroSorter<T> |
|
ArrayTimSorter<T> |
|
ArrayUtil |
Methods for manipulating arrays.
|
ASCIIFoldingFilter |
This class converts alphabetic, numeric, and symbolic Unicode characters
which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
block) into their ASCII equivalents, if one exists.
|
ASCIIFoldingFilterFactory |
|
Attribute |
Base interface for attributes.
|
AttributeFactory |
|
AttributeFactory.DefaultAttributeFactory |
|
AttributeFactory.StaticImplementationAttributeFactory<A extends AttributeImpl> |
Expert: AttributeFactory returning an instance of the given clazz for the
attributes it implements.
|
AttributeImpl |
|
AttributeReflector |
|
AttributeSource |
An AttributeSource contains a list of different AttributeImpls,
and methods to add and get them.
|
AttributeSource.State |
This class holds the state of an AttributeSource.
|
Automata |
Construction of basic automata.
|
Automaton |
Represents an automaton and all its states and transitions.
|
Automaton.Builder |
|
AutomatonProvider |
|
AutomatonQuery |
A Query that will match terms against a finite-state machine.
|
AutomatonTermsEnum |
A FilteredTermsEnum that enumerates terms based upon what is accepted by a
DFA.
|
AveragePayloadFunction |
Calculate the final score as the average score of all payloads seen.
|
Axiomatic |
Axiomatic approaches for IR.
|
AxiomaticF1EXP |
F1EXP is defined as Sum(tf(term_doc_freq)*ln(docLen)*IDF(term))
where IDF(t) = pow((N+1)/df(t), k) N=total num of docs, df=doc freq
|
AxiomaticF1LOG |
F1LOG is defined as Sum(tf(term_doc_freq)*ln(docLen)*IDF(term))
where IDF(t) = ln((N+1)/df(t)) N=total num of docs, df=doc freq
|
AxiomaticF2EXP |
F2EXP is defined as Sum(tfln(term_doc_freq, docLen)*IDF(term))
where IDF(t) = pow((N+1)/df(t), k) N=total num of docs, df=doc freq
|
AxiomaticF2LOG |
F2LOG is defined as Sum(tfln(term_doc_freq, docLen)*IDF(term))
where IDF(t) = ln((N+1)/df(t)) N=total num of docs, df=doc freq
|
AxiomaticF3EXP |
F3EXP is defined as Sum(tf(term_doc_freq)*IDF(term)-gamma(docLen, queryLen))
where IDF(t) = pow((N+1)/df(t), k) N=total num of docs, df=doc freq
gamma(docLen, queryLen) = (docLen-queryLen)*queryLen*s/avdl
NOTE: the gamma function of this similarity creates negative scores
|
AxiomaticF3LOG |
F3LOG is defined as Sum(tf(term_doc_freq)*IDF(term)-gamma(docLen, queryLen))
where IDF(t) = ln((N+1)/df(t)) N=total num of docs, df=doc freq
gamma(docLen, queryLen) = (docLen-queryLen)*queryLen*s/avdl
NOTE: the gamma function of this similarity creates negative scores
|
BaseCharFilter |
|
BaseCompositeReader<R extends IndexReader> |
Base class for implementing CompositeReader s based on an array
of sub-readers.
|
BaseDirectory |
|
BaseFormAttribute |
|
BaseFormAttributeImpl |
|
BaseFragListBuilder |
|
BaseFragListBuilder.IteratorQueue<T> |
|
BaseFragmentsBuilder |
Base FragmentsBuilder implementation that supports colored pre/post
tags and multivalued fields.
|
BaseGlobalOrdinalScorer |
|
BaseTermsEnum |
|
BasicModel |
This class acts as the base class for the specific basic model
implementations in the DFR framework.
|
BasicModelG |
Geometric as limiting form of the Bose-Einstein model.
|
BasicModelIF |
An approximation of the I(ne) model.
|
BasicModelIn |
The basic tf-idf model of randomness.
|
BasicModelIne |
Tf-idf model of randomness, based on a mixture of Poisson and inverse
document frequency.
|
BasicQueryFactory |
Factory for creating basic term queries
|
BasicStats |
Stores all statistics commonly used by ranking methods.
|
BasqueAnalyzer |
|
BasqueAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
BasqueStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
BeiderMorseFilter |
TokenFilter for Beider-Morse phonetic encoding.
|
BeiderMorseFilterFactory |
|
BengaliAnalyzer |
Analyzer for Bengali.
|
BengaliAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
BengaliNormalizationFilter |
|
BengaliNormalizationFilterFactory |
|
BengaliNormalizer |
Normalizer for Bengali.
|
BengaliStemFilter |
|
BengaliStemFilterFactory |
|
BengaliStemmer |
Stemmer for Bengali.
|
BigIntegerPoint |
An indexed 128-bit BigInteger field.
|
BigramDictionary |
SmartChineseAnalyzer Bigram dictionary.
|
BinaryDictionary |
Base class for a binary-encoded in-memory dictionary.
|
BinaryDictionary |
Base class for a binary-encoded in-memory dictionary.
|
BinaryDictionary.ResourceScheme |
Used to specify where (dictionary) resources get loaded from.
|
BinaryDictionary.ResourceScheme |
Used to specify where (dictionary) resources get loaded from.
|
BinaryDictionaryWriter |
|
BinaryDictionaryWriter |
|
BinaryDocValues |
A per-document binary value.
|
BinaryDocValuesField |
Field that stores a per-document BytesRef value.
|
BinaryDocValuesFieldUpdates |
|
BinaryDocValuesFieldUpdates.Iterator |
|
BinaryDocValuesWriter |
Buffers up pending byte[] per doc, then flushes when
segment flushes.
|
BinaryDocValuesWriter.BufferedBinaryDocValues |
|
BinaryPoint |
An indexed binary field for fast range filters.
|
BinaryRangeDocValues |
|
BinaryRangeDocValuesField |
|
BinaryRangeFieldRangeQuery |
|
Bindings |
Binds variable names in expressions to actual data.
|
BiSegGraph |
Graph representing possible token pairs (bigrams) at each start offset in the sentence.
|
BitDocIdSet |
|
Bits |
Interface for Bitset-like structures.
|
Bits.MatchAllBits |
Bits impl of the specified length with all bits set.
|
Bits.MatchNoBits |
Bits impl of the specified length with no bits set.
|
BitSet |
Base implementation for a bit set.
|
BitSetIterator |
|
BitSetProducer |
A producer of BitSets per segment.
|
BitsProducer |
A producer of Bits per segment.
|
BitsSlice |
Exposes a slice of an existing Bits as a new Bits.
|
BitTableUtil |
|
BitUtil |
A variety of high efficiency bit twiddling routines.
|
BKDRadixSelector |
Offline Radix selector for BKD tree.
|
BKDRadixSelector.PathSlice |
Sliced reference to points in a PointWriter.
|
BKDReader |
Handles intersection of a multi-dimensional shape in byte[] space with a block KD-tree previously written with BKDWriter.
|
BKDReader.BKDReaderDocIDSetIterator |
|
BKDReader.IntersectState |
|
BKDWriter |
Recursively builds a block KD-tree to assign all incoming points in N-dim space to smaller
and smaller N-dim rectangles (cells) until the number of points in a given
rectangle is <= maxPointsInLeafNode.
|
BKDWriter.BKDMergeQueue |
|
BKDWriter.BKDTreeLeafNodes |
Flat representation of a KD-tree.
|
BKDWriter.MergeReader |
|
BlendedInfixSuggester |
Extension of the AnalyzingInfixSuggester which transforms the weight
after search to take into account the position of the searched term in
the indexed text.
|
BlendedInfixSuggester.BlenderType |
The different types of blender.
|
BlendedInfixSuggester.LookUpComparator |
|
BlendedTermQuery |
A Query that blends index statistics across multiple terms.
|
BlendedTermQuery.Builder |
|
BlendedTermQuery.DisjunctionMaxRewrite |
|
BlendedTermQuery.RewriteMethod |
|
BlockDecoder |
Decodes the raw bytes of a block when the index is read, according to the
BlockEncoder used during the writing of the index.
|
BlockEncoder |
Encodes the raw bytes of a block when the index is written.
|
BlockEncoder.WritableBytes |
Writable byte buffer.
|
BlockGroupingCollector |
BlockGroupingCollector performs grouping with a
single pass collector, as long as you are grouping by a
doc block field, i.e. all documents sharing a given group
value were indexed as a doc block using the atomic
IndexWriter.addDocuments()
or IndexWriter.updateDocuments()
API.
|
BlockGroupingCollector.OneGroup |
|
BlockGroupingCollector.ScoreAndDoc |
|
BlockHeader |
Block header containing block metadata.
|
BlockHeader.Serializer |
Reads/writes block header.
|
BlockIntervalsSource |
|
BlockIntervalsSource.BlockIntervalIterator |
|
BlockJoinSelector |
Select a value from a block of documents.
|
BlockJoinSelector.Type |
Type of selection to perform.
|
BlockLine |
One term block line.
|
BlockLine.Serializer |
Reads/writes block lines with terms encoded incrementally inside a block.
|
BlockMaxConjunctionScorer |
Scorer for conjunctions that checks the maximum scores of each clause in
order to potentially skip over blocks that can't have competitive matches.
|
BlockMaxDISI |
DocIdSetIterator that skips non-competitive docs by checking
the max score of the provided Scorer for the current block.
|
BlockPackedReader |
|
BlockPackedReaderIterator |
|
BlockPackedWriter |
A writer for large sequences of longs.
|
BlockReader |
Seeks the block corresponding to a given term, reads the block bytes, and
scans the block terms.
|
BlockTermsReader |
Handles a terms dict, but decouples all details of
doc/freqs/positions reading to an instance of PostingsReaderBase.
|
BlockTermsReader.FieldAndTerm |
|
BlockTermState |
|
BlockTermsWriter |
Writes terms dict, block-encoding (column stride) each
term's metadata for each set of terms between two
index terms.
|
BlockTermsWriter.FieldMetaData |
|
BlockTermsWriter.TermEntry |
|
BlockTreeOrdsPostingsFormat |
|
BlockTreeTermsReader |
A block-based terms index and dictionary that assigns
terms to variable length blocks according to how they
share prefixes.
|
BlockTreeTermsWriter |
Block-based terms index and dictionary writer.
|
BlockTreeTermsWriter.PendingBlock |
|
BlockTreeTermsWriter.PendingEntry |
|
BlockTreeTermsWriter.PendingTerm |
|
BlockTreeTermsWriter.StatsWriter |
|
BlockWriter |
Writes blocks in the block file.
|
BloomFilterFactory |
Class used to create index-time FuzzySet appropriately configured for
each field.
|
BloomFilteringPostingsFormat |
A PostingsFormat useful for low doc-frequency fields such as primary
keys.
|
BloomFilteringPostingsFormat.BloomFilteredFieldsProducer |
|
BloomFilteringPostingsFormat.BloomFilteredFieldsProducer.BloomFilteredTerms |
|
BloomFilteringPostingsFormat.BloomFilteredFieldsProducer.BloomFilteredTermsEnum |
|
BM25FQuery |
A Query that treats multiple fields as a single stream and scores
terms as if you had indexed them as a single term in a single field.
|
BM25FQuery.BM25FScorer |
|
BM25FQuery.Builder |
|
BM25FQuery.FieldAndWeight |
|
BM25FQuery.WeightedDisiWrapper |
|
BM25NBClassifier |
A classifier approximating a naive Bayes classifier by using pure queries on BM25.
|
BM25Similarity |
BM25 Similarity.
|
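As a rough illustration (assuming an already-open IndexReader named reader), the similarity can be plugged into an IndexSearcher; the k1/b values shown are just the usual defaults.

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.similarities.BM25Similarity;

    public class Bm25Sketch {
      // 'reader' is assumed to be an already-open IndexReader.
      static IndexSearcher newSearcher(IndexReader reader) {
        IndexSearcher searcher = new IndexSearcher(reader);
        searcher.setSimilarity(new BM25Similarity(1.2f, 0.75f)); // k1 = 1.2, b = 0.75
        return searcher;
      }
    }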
BM25Similarity.BM25Scorer |
Collection statistics for the BM25 model.
|
BoolDocValues |
Abstract FunctionValues implementation which supports retrieving boolean values.
|
Boolean2ScorerSupplier |
|
BooleanClause |
A clause in a BooleanQuery.
|
BooleanClause.Occur |
Specifies how clauses are to occur in matching documents.
|
BooleanModifierNode |
|
BooleanModifiersQueryNodeProcessor |
|
BooleanPerceptronClassifier |
A perceptron-based Boolean Classifier
(see http://en.wikipedia.org/wiki/Perceptron).
|
BooleanQuery |
A Query that matches documents matching boolean combinations of other
queries, e.g.
|
BooleanQuery.Builder |
A builder for boolean queries.
|
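A brief sketch of assembling a boolean query with the builder; the field names and terms are invented for illustration.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class BooleanQuerySketch {
      static Query build() {
        return new BooleanQuery.Builder()
            .add(new TermQuery(new Term("title", "lucene")), BooleanClause.Occur.MUST)      // required
            .add(new TermQuery(new Term("body", "search")), BooleanClause.Occur.SHOULD)     // optional
            .add(new TermQuery(new Term("status", "draft")), BooleanClause.Occur.MUST_NOT)  // excluded
            .build();
      }
    }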
BooleanQuery.TooManyClauses |
|
BooleanQuery2ModifierNodeProcessor |
|
BooleanQueryBuilder |
|
BooleanQueryNode |
A BooleanQueryNode represents a list of elements which do not have an
explicit boolean operator defined between them.
|
BooleanQueryNodeBuilder |
|
BooleanScorer |
|
BooleanScorer.Bucket |
|
BooleanScorer.HeadPriorityQueue |
|
BooleanScorer.TailPriorityQueue |
|
BooleanSimilarity |
Simple similarity that gives terms a score that is equal to their query
boost.
|
BooleanSimilarity.BooleanWeight |
|
BooleanSingleChildOptimizationQueryNodeProcessor |
This processor removes every BooleanQueryNode that contains only one
child and returns this child.
|
BooleanWeight |
Expert: the Weight for BooleanQuery, used to
normalize, score and explain these queries.
|
BooleanWeight.WeightedBooleanClause |
|
BoolFunction |
Abstract parent class for those ValueSource implementations which
apply boolean logic to their values
|
BoostAttribute |
|
BoostAttributeImpl |
|
BoostingTermBuilder |
|
BoostQuery |
A Query wrapper that allows giving a boost to the wrapped query.
|
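A small sketch of boosting a wrapped query; the field name is made up for the example.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BoostQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class BoostQuerySketch {
      static Query boosted() {
        // Give matches on the (made-up) "title" field twice the weight.
        return new BoostQuery(new TermQuery(new Term("title", "lucene")), 2.0f);
      }
    }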
BoostQueryNode |
|
BoostQueryNodeBuilder |
|
BoostQueryNodeProcessor |
|
BoundaryScanner |
|
BrazilianAnalyzer |
Analyzer for Brazilian Portuguese language.
|
BrazilianAnalyzer.DefaultSetHolder |
|
BrazilianStemFilter |
|
BrazilianStemFilterFactory |
|
BrazilianStemmer |
A stemmer for Brazilian Portuguese words.
|
BreakIteratorBoundaryScanner |
A BoundaryScanner implementation that uses BreakIterator to find
boundaries in the text.
|
BreakIteratorWrapper |
Wraps RuleBasedBreakIterator, making object reuse convenient and
emitting a rule status for emoji sequences.
|
BufferedChecksum |
Wraps another Checksum with an internal buffer
to speed up checksum calculations.
|
BufferedChecksumIndexInput |
|
BufferedIndexInput |
Base implementation class for buffered IndexInput .
|
BufferedIndexInput.SlicedIndexInput |
Implementation of an IndexInput that reads from a portion of a file.
|
BufferedInputIterator |
This wrapper buffers incoming elements.
|
BufferedUpdates |
Holds buffered deletes and updates, by docID, term or query for a
single segment.
|
BufferedUpdatesStream |
|
BufferedUpdatesStream.ApplyDeletesResult |
|
BufferedUpdatesStream.FinishedSegments |
Tracks the contiguous range of packets that have finished resolving.
|
BufferedUpdatesStream.SegmentState |
Holds all per-segment internal state used while resolving deletions.
|
Builder<T> |
Builds a minimal FST (maps an IntsRef term to an arbitrary
output) from pre-sorted terms with outputs.
|
Builder.Arc<T> |
Expert: holds a pending (seen but not yet serialized) arc.
|
Builder.CompiledNode |
|
Builder.FixedLengthArcsBuffer |
Reusable buffer for building nodes with fixed length arcs (binary search or direct addressing).
|
Builder.Node |
|
Builder.UnCompiledNode<T> |
Expert: holds a pending (seen but not yet serialized) Node.
|
BulgarianAnalyzer |
|
BulgarianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer
class accesses the static final set the first time.
|
BulgarianStemFilter |
|
BulgarianStemFilterFactory |
|
BulgarianStemmer |
Light Stemmer for Bulgarian.
|
BulkOperation |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked |
|
BulkOperationPacked1 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked10 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked11 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked12 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked13 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked14 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked15 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked16 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked17 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked18 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked19 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked2 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked20 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked21 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked22 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked23 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked24 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked3 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked4 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked5 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked6 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked7 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked8 |
Efficient sequential read/write of packed integers.
|
BulkOperationPacked9 |
Efficient sequential read/write of packed integers.
|
BulkOperationPackedSingleBlock |
|
BulkScorer |
|
ByteArrayDataInput |
DataInput backed by a byte array.
|
ByteArrayDataOutput |
DataOutput backed by a byte array.
|
ByteBlockPool |
Class that Posting and PostingVector use to write byte
streams into shared fixed-size byte[] arrays.
|
ByteBlockPool.Allocator |
Abstract class for allocating and freeing byte
blocks.
|
ByteBlockPool.DirectAllocator |
|
ByteBlockPool.DirectTrackingAllocator |
|
ByteBufferGuard |
A guard created for every ByteBufferIndexInput that tries, on a best-effort basis,
to reject any access to the underlying ByteBuffer once it is unmapped.
|
ByteBufferGuard.BufferCleaner |
Pass in an implementation of this interface to cleanup ByteBuffers.
|
ByteBufferIndexInput |
Base IndexInput implementation that uses an array
of ByteBuffers to represent a file.
|
ByteBufferIndexInput.MultiBufferImpl |
This class adds offset support to ByteBufferIndexInput, which is needed for slices.
|
ByteBufferIndexInput.SingleBufferImpl |
Optimization of ByteBufferIndexInput for when there is only one buffer
|
ByteBuffersDataInput |
|
ByteBuffersDataOutput |
A DataOutput storing data in a list of ByteBuffers.
|
ByteBuffersDataOutput.ByteBufferRecycler |
An implementation of a ByteBuffer allocation and recycling policy.
|
ByteBuffersDirectory |
A ByteBuffer -based Directory implementation that
can be used to store index files on the heap.
|
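A minimal sketch of indexing into a heap-resident directory; IndexWriter, IndexWriterConfig and StandardAnalyzer are standard Lucene classes, the document content is invented.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    public class HeapIndexSketch {
      public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();   // index files live on the heap
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
          Document doc = new Document();
          doc.add(new TextField("body", "hello lucene", Field.Store.YES));
          writer.addDocument(doc);
        }
      }
    }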
ByteBuffersIndexInput |
|
ByteBuffersIndexOutput |
|
ByteRunAutomaton |
Automaton representation for matching UTF-8 byte[].
|
ByteSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of bytes.
|
ByteSliceReader |
|
ByteSliceWriter |
Class to write byte streams into slices of shared
byte[].
|
BytesRef |
Represents byte[], as a slice (offset + length) into an
existing byte[].
|
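A quick sketch of round-tripping a string through a BytesRef slice.

    import org.apache.lucene.util.BytesRef;

    public class BytesRefSketch {
      public static void main(String[] args) {
        BytesRef ref = new BytesRef("hello");     // UTF-8 encodes the string
        System.out.println(ref.length);           // number of valid bytes in the slice
        System.out.println(ref.utf8ToString());   // back to "hello"
      }
    }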
BytesRefArray |
A simple append-only random-access BytesRef array that stores full
copies of the appended bytes in a ByteBlockPool.
|
BytesRefArray.IndexedBytesRefIterator |
An extension of BytesRefIterator that allows retrieving the index of the current element
|
BytesRefArray.SortState |
Used to iterate the elements of an array in a given order.
|
BytesRefBuilder |
|
BytesRefComparator |
|
BytesRefFieldSource |
An implementation for retrieving FunctionValues instances for string based fields.
|
BytesRefFSTEnum<T> |
Enumerates all input (BytesRef) + output pairs in an
FST.
|
BytesRefFSTEnum.InputOutput<T> |
Holds a single input (BytesRef) + output pair.
|
BytesRefHash |
|
BytesRefHash.BytesStartArray |
Manages allocation of the per-term addresses.
|
BytesRefHash.DirectBytesStartArray |
|
BytesRefHash.MaxBytesLengthExceededException |
|
BytesRefIterator |
A simple iterator interface for BytesRef iteration.
|
BytesRefSorter |
Collects BytesRef instances and then allows iterating over them in sorted order.
|
BytesStore |
|
BytesTermAttribute |
This attribute can be used if you have the raw term bytes to be indexed.
|
BytesTermAttributeImpl |
|
ByteVector |
This class implements a simple byte vector with access to the underlying
array.
|
CachingCollector |
Caches all docs, and optionally also scores, coming from
a search, and is then able to replay them to another
collector.
|
CachingCollector.CachedScorable |
|
CachingCollector.NoScoreCachingCollector |
|
CachingCollector.ScoreCachingCollector |
|
CachingMatchesIterator |
|
CachingNaiveBayesClassifier |
A simplistic Lucene-based naive Bayes classifier with a caching feature; see
http://en.wikipedia.org/wiki/Naive_Bayes_classifier
|
CachingTokenFilter |
This class can be used if the token attributes of a TokenStream
are intended to be consumed more than once.
|
CandidateMatcher<T extends QueryMatch> |
Class used to match candidate queries selected by a Presearcher from a Monitor
query index.
|
CandidateMatcher.MatchHolder<T> |
|
CapitalizationFilter |
A filter to apply normal capitalization rules to Tokens.
|
CapitalizationFilterFactory |
|
CatalanAnalyzer |
|
CatalanAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
CatalanStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
Cell |
A Cell is a portion of a trie.
|
CharacterDefinition |
Character category data.
|
CharacterDefinition |
Character category data.
|
CharacterDefinition.CharacterClass |
|
CharacterDefinition.CharacterClass |
|
CharacterDefinition.SingletonHolder |
|
CharacterDefinition.SingletonHolder |
|
CharacterDefinitionWriter |
|
CharacterDefinitionWriter |
|
CharacterRunAutomaton |
Automaton representation for matching char[].
|
CharacterUtils |
Utility class to write tokenizers or token filters.
|
CharacterUtils.CharacterBuffer |
|
CharArrayIterator |
Wraps a char[] as CharacterIterator for processing with a BreakIterator
|
CharArrayIterator |
A CharacterIterator used internally with BreakIterator
|
CharArrayMap<V> |
A simple class that stores key Strings as char[]'s in a
hash table.
|
CharArrayMap.EmptyCharArrayMap<V> |
|
CharArrayMap.UnmodifiableCharArrayMap<V> |
|
CharArrayMatcher |
Matches a character array
|
CharArraySet |
A simple class that stores Strings as char[]'s in a
hash table.
|
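A small sketch of building a case-insensitive set; the stop words chosen are arbitrary.

    import java.util.Arrays;
    import org.apache.lucene.analysis.CharArraySet;

    public class CharArraySetSketch {
      public static void main(String[] args) {
        // second argument: true = ignore case when matching
        CharArraySet stopWords = new CharArraySet(Arrays.asList("the", "a", "of"), true);
        System.out.println(stopWords.contains("The"));     // true, case is ignored
        System.out.println(stopWords.contains("lucene"));  // false
      }
    }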
CharFilter |
Subclasses of CharFilter can be chained to filter a Reader.
They can be used as a Reader with additional offset
correction.
|
CharFilterFactory |
Abstract parent class for analysis factories that create CharFilter
instances.
|
CharSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of characters.
|
CharsRef |
Represents char[], as a slice (offset + length) into an existing char[].
|
CharsRef.UTF16SortedAsUTF8Comparator |
Deprecated.
|
CharsRefBuilder |
|
CharStream |
This interface describes a character stream that maintains line and
column number positions of the characters.
|
CharStream |
This interface describes a character stream that maintains line and
column number positions of the characters.
|
CharStream |
This interface describes a character stream that maintains line and
column number positions of the characters.
|
CharTermAttribute |
The term text of a Token.
|
CharTermAttributeImpl |
|
CharTokenizer |
An abstract base class for simple, character-oriented tokenizers.
|
CharType |
Internal SmartChineseAnalyzer character type constants.
|
CharVector |
This class implements a simple char vector with access to the underlying
array.
|
CheckIndex |
Basic tool and API to check the health of an index and
write a new segments file that removes reference to
problematic segments.
|
CheckIndex.ConstantRelationIntersectVisitor |
|
CheckIndex.DocValuesIteratorSupplier |
|
CheckIndex.Options |
Run-time configuration options for CheckIndex commands.
|
CheckIndex.Status |
|
CheckIndex.Status.DocValuesStatus |
Status from testing DocValues
|
CheckIndex.Status.FieldInfoStatus |
Status from testing field infos.
|
CheckIndex.Status.FieldNormStatus |
Status from testing field norms.
|
CheckIndex.Status.IndexSortStatus |
Status from testing index sort
|
CheckIndex.Status.LiveDocStatus |
Status from testing livedocs
|
CheckIndex.Status.PointsStatus |
Status from testing PointValues
|
CheckIndex.Status.SegmentInfoStatus |
Holds the status of each segment in the index.
|
CheckIndex.Status.StoredFieldStatus |
Status from testing stored fields.
|
CheckIndex.Status.TermIndexStatus |
Status from testing term index.
|
CheckIndex.Status.TermVectorStatus |
Status from testing term vectors.
|
CheckIndex.VerifyPointsVisitor |
Walks the entire N-dimensional points space, verifying that all points fall within the last cell's boundaries.
|
CheckJoinIndex |
Utility class to check a block join index.
|
ChecksumIndexInput |
Extension of IndexInput, computing checksum as it goes.
|
Circle |
Represents a circle on the earth's surface.
|
Circle2D |
2D circle implementation containing spatial logic.
|
Circle2D.CartesianDistance |
|
Circle2D.DistanceCalculator |
|
Circle2D.HaversinDistance |
|
CJKAnalyzer |
|
CJKAnalyzer.DefaultSetHolder |
|
CJKBigramFilter |
Forms bigrams of CJK terms that are generated from StandardTokenizer
or ICUTokenizer.
|
CJKBigramFilterFactory |
|
CJKWidthFilter |
A TokenFilter that normalizes CJK width differences:
folds fullwidth ASCII variants into the equivalent Basic Latin, and
folds halfwidth Katakana variants into the equivalent kana.
|
CJKWidthFilterFactory |
|
ClassicAnalyzer |
|
ClassicFilter |
|
ClassicFilterFactory |
|
ClassicSimilarity |
Expert: Historical scoring implementation.
|
ClassicTokenizer |
A grammar-based tokenizer constructed with JFlex
|
ClassicTokenizerFactory |
|
ClassicTokenizerImpl |
This class implements the classic Lucene StandardTokenizer up until 3.0.
|
ClassificationResult<T> |
|
Classifier<T> |
A classifier (see http://en.wikipedia.org/wiki/Classifier_(mathematics))
which assigns classes of type T.
|
ClasspathResourceLoader |
Simple ResourceLoader that uses ClassLoader.getResourceAsStream(String)
and Class.forName(String,boolean,ClassLoader) to open resources and
classes, respectively.
|
CloseableThreadLocal<T> |
Java's builtin ThreadLocal has a serious flaw:
it can take an arbitrarily long amount of time to
dereference the things you had stored in it, even once the
ThreadLocal instance itself is no longer referenced.
|
Codec |
Encodes/decodes an inverted index segment.
|
Codec.Holder |
This static holder class prevents classloading deadlock by delaying
init of default codecs and available codecs until needed.
|
CodecReader |
LeafReader implemented by codec APIs.
|
CodecUtil |
Utility class for reading and writing versioned headers.
|
CodepointCountFilter |
Removes words that are too long or too short from the stream.
|
CodepointCountFilterFactory |
|
CollatedTermAttributeImpl |
Extension of CharTermAttributeImpl that encodes the term
text as a binary Unicode collation key instead of as UTF-8 bytes.
|
CollationAttributeFactory |
Converts each token into its CollationKey , and then
encodes the bytes as an index term.
|
CollationDocValuesField |
|
CollationKeyAnalyzer |
|
CollectedSearchGroup<T> |
|
CollectingMatcher<T extends QueryMatch> |
|
CollectionStatistics |
Contains statistics for a collection (field).
|
CollectionTerminatedException |
|
CollectionUtil |
Methods for manipulating (sorting) collections.
|
CollectionUtil.ListIntroSorter<T> |
|
CollectionUtil.ListTimSorter<T> |
|
Collector |
Expert: Collectors are primarily meant to be used to
gather raw results from a search, and implement sorting
or custom result filtering, collation, etc.
|
CollectorManager<C extends Collector,T> |
A manager of collectors.
|
CollectorMemoryTracker |
Default implementation of MemoryTracker that tracks
allocations and allows setting a memory limit per collector
|
CombineSuggestion |
A suggestion generated by combining one or more original query terms
|
CommandLineUtil |
Class containing some useful methods used by command line tools
|
CommonGramsFilter |
Construct bigrams for frequently occurring terms while indexing.
|
CommonGramsFilterFactory |
|
CommonGramsQueryFilter |
Wrap a CommonGramsFilter optimizing phrase queries by only returning single
words when they are not a member of a bigram.
|
CommonGramsQueryFilterFactory |
|
CommonQueryParserConfiguration |
Configuration options common across queryparser implementations.
|
CommonTermsQuery |
A query that executes high-frequency terms in an optional sub-query to prevent
slow queries due to "common" terms like stopwords.
|
ComparisonBoolFunction |
Base class for comparison operators useful within an "if"/conditional.
|
CompetitiveImpactAccumulator |
This class accumulates the (freq, norm) pairs that may produce competitive scores.
|
Compile |
The Compile class is used to compile a stemmer table.
|
CompiledAutomaton |
Immutable class holding compiled details for a given
Automaton.
|
CompiledAutomaton.AUTOMATON_TYPE |
Automata are compiled into different internal forms for the
most efficient execution depending upon the language they accept.
|
Completion50PostingsFormat |
|
Completion84PostingsFormat |
|
CompletionAnalyzer |
Wraps an Analyzer
to provide additional completion-only tuning
(e.g.
|
CompletionFieldsConsumer |
|
CompletionFieldsConsumer.CompletionMetaData |
|
CompletionFieldsConsumer.CompletionTermWriter |
|
CompletionFieldsProducer |
Completion index (.cmp) is opened and read at instantiation to read in SuggestField
numbers and their FST offsets in the Completion dictionary (.lkp).
|
CompletionPostingsFormat |
|
CompletionPostingsFormat.FSTLoadMode |
An enum that controls whether suggester FSTs are loaded into memory or read off-heap.
|
CompletionQuery |
Abstract Query that matches documents containing terms with a specified prefix,
filtered by BitsProducer.
|
CompletionScorer |
Expert: Responsible for executing the query against an
appropriate suggester and collecting the results
via a collector.
|
CompletionsTermsReader |
Holder for suggester and field-level info
for a suggest field
|
CompletionTerms |
|
CompletionTokenStream |
|
CompletionWeight |
Expert: the Weight for CompletionQuery, used to
score and explain these queries.
|
ComplexPhraseQueryParser |
QueryParser which permits complex phrase query syntax eg "(john jon
jonathan~) peters*".
|
ComplexPhraseQueryParser.ComplexPhraseQuery |
|
Component2D |
2D Geometry object that supports spatial relationships with bounding boxes,
triangles and points.
|
Component2D.WithinRelation |
Used by withinTriangle to check the within relationship between a triangle and the query shape
(e.g.
|
ComponentTree |
2D multi-component geometry implementation represented as an interval tree of components.
|
ComposedQuery |
Base class for composite queries (such as AND/OR/NOT)
|
CompositeBreakIterator |
An internal BreakIterator for multilingual text, following recommendations
from: UAX #29: Unicode Text Segmentation.
|
CompositeReader |
Instances of this reader type can only
be used to get stored fields from the underlying LeafReaders,
but it is not possible to directly retrieve postings.
|
CompositeReaderContext |
|
CompositeReaderContext.Builder |
|
CompoundDirectory |
A read-only Directory that consists of a view over a compound file.
|
CompoundFormat |
Encodes/decodes compound files
|
CompoundWordTokenFilterBase |
Base class for decomposition token filters.
|
CompressingStoredFieldsFormat |
A StoredFieldsFormat that compresses documents in chunks in
order to improve the compression ratio.
|
CompressingStoredFieldsReader |
|
CompressingStoredFieldsReader.SerializedDocument |
A serialized document; you need to decode its input in order to get an actual
Document.
|
CompressingStoredFieldsWriter |
|
CompressingStoredFieldsWriter.CompressingStoredFieldsMergeSub |
|
CompressingTermVectorsFormat |
A TermVectorsFormat that compresses chunks of documents together in
order to improve the compression ratio.
|
CompressingTermVectorsReader |
|
CompressingTermVectorsReader.TVPostingsEnum |
|
CompressingTermVectorsReader.TVTerms |
|
CompressingTermVectorsReader.TVTermsEnum |
|
CompressingTermVectorsWriter |
|
CompressionAlgorithm |
Compression algorithm used for suffixes of a block of terms.
|
CompressionMode |
A compression mode.
|
CompressionMode.DeflateCompressor |
|
CompressionMode.DeflateDecompressor |
|
CompressionMode.LZ4FastCompressor |
|
CompressionMode.LZ4HighCompressor |
|
Compressor |
A data compressor.
|
ConcatenateGraphFilter |
Concatenates/Joins every incoming token with a separator into one output token for every path through the
token stream (which is a graph).
|
ConcatenateGraphFilter.BytesRefBuilderTermAttribute |
Attribute providing access to the term builder and UTF-16 conversion
|
ConcatenateGraphFilter.BytesRefBuilderTermAttributeImpl |
|
ConcatenateGraphFilter.EscapingTokenStreamToAutomaton |
|
ConcatenateGraphFilterFactory |
|
ConcatenatingTokenStream |
A TokenStream that takes an array of input TokenStreams as sources, and
concatenates them together.
|
ConcurrentMergeScheduler |
|
ConcurrentQueryLoader |
Utility class for concurrently loading queries into a Monitor.
|
ConditionalTokenFilter |
Allows skipping TokenFilters based on the current set of attributes.
|
ConditionalTokenFilter.TokenState |
|
ConditionalTokenFilterFactory |
|
ConfigurationKey<T> |
An instance of this class represents a key that is used to retrieve a value
from AbstractQueryConfig.
|
ConfusionMatrixGenerator |
Utility class to generate the confusion matrix of a Classifier
|
ConfusionMatrixGenerator.ConfusionMatrix |
A confusion matrix, backed by a Map representing the linearized matrix.
|
ConjunctionDISI |
A conjunction of DocIdSetIterators.
|
ConjunctionDISI |
A conjunction of DocIdSetIterators.
|
ConjunctionDISI.BitSetConjunctionDISI |
|
ConjunctionDISI.ConjunctionTwoPhaseIterator |
|
ConjunctionIntervalIterator |
|
ConjunctionIntervalsSource |
|
ConjunctionIntervalsSource.ConjunctionMatchesIterator |
|
ConjunctionIntervalsSource.SingletonMatchesIterator |
|
ConjunctionScorer |
Scorer for conjunctions, sets of queries, all of which are required.
|
ConjunctionScorer.DocsAndFreqs |
|
ConjunctionSpans |
Common super class for multiple sub spans required in a document.
|
ConnectionCosts |
n-gram connection cost data
|
ConnectionCosts |
n-gram connection cost data
|
ConnectionCosts.SingletonHolder |
|
ConnectionCosts.SingletonHolder |
|
ConnectionCostsBuilder |
|
ConnectionCostsBuilder |
|
ConnectionCostsWriter |
|
ConnectionCostsWriter |
|
Constants |
Some useful constants.
|
ConstantScoreQuery |
A query that wraps another query and simply returns a constant score equal to
1 for every document that matches the query.
|
ConstantScoreQuery.ConstantBulkScorer |
We return this as our BulkScorer so that if the CSQ
wraps a query with its own optimized top-level
scorer (e.g.
|
ConstantScoreQueryBuilder |
|
ConstantScoreScorer |
|
ConstantScoreWeight |
A Weight that has a constant score equal to the boost of the wrapped query.
|
ConstNumberSource |
ConstNumberSource is the base class for all constant numbers
|
ConstValueSource |
ConstValueSource returns a constant for all documents
|
ContainedByIntervalsSource |
|
ContainingIntervalsSource |
|
ContainSpans |
|
ContextQuery |
|
ContextQuery.ContextCompletionWeight |
|
ContextQuery.ContextMetaData |
Holder for context value meta data
|
ContextSuggestField |
|
ContextSuggestField.PrefixTokenFilter |
|
ControlledRealTimeReopenThread<T> |
Utility class that runs a thread to manage periodic
reopens of a ReferenceManager, with methods to wait for specific
index changes to become visible.
|
CoreParser |
Assembles a QueryBuilder which uses only core Lucene Query objects
|
CorePlusExtensionsParser |
Assembles a QueryBuilder which uses Query objects from
Lucene's sandbox and queries
modules in addition to core queries.
|
CorePlusQueriesParser |
Assembles a QueryBuilder which uses Query objects from
Lucene's queries module in addition to core queries.
|
CorruptIndexException |
This exception is thrown when Lucene detects
an inconsistency in the index.
|
Counter |
Simple counter class
|
Counter.AtomicCounter |
|
Counter.SerialCounter |
|
CoveringQuery |
A Query that allows a configurable number of required
matches per document.
|
CoveringQuery.CoveringWeight |
|
CoveringScorer |
A Scorer whose number of matches is per-document.
|
CSVUtil |
Utility class for parsing CSV text
|
CSVUtil |
Utility class for parsing CSV text
|
CustomAnalyzer |
A general-purpose Analyzer that can be created with a builder-style API.
|
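A brief sketch of the builder-style API; the SPI names used ("standard", "lowercase", "stop") are the stock factory names chosen for illustration.

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.custom.CustomAnalyzer;

    public class CustomAnalyzerSketch {
      static Analyzer build() throws IOException {
        // "standard" tokenizer followed by lowercase and stop filters.
        return CustomAnalyzer.builder()
            .withTokenizer("standard")
            .addTokenFilter("lowercase")
            .addTokenFilter("stop")
            .build();
      }
    }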
CustomAnalyzer.Builder |
|
CustomAnalyzer.ConditionBuilder |
|
CustomQueryHandler |
Builds a QueryTree for a query that needs custom treatment.
The default query analyzers will use the QueryVisitor API to extract
terms from queries.
|
CustomSeparatorBreakIterator |
A BreakIterator that breaks the text whenever a certain separator, provided as a constructor argument, is found.
|
CzechAnalyzer |
|
CzechAnalyzer.DefaultSetHolder |
|
CzechStemFilter |
|
CzechStemFilterFactory |
|
CzechStemmer |
Light Stemmer for Czech.
|
DaciukMihovAutomatonBuilder |
Builds a minimal, deterministic Automaton that accepts a set of
strings.
|
DaciukMihovAutomatonBuilder.State |
DFSA state with char labels on transitions.
|
DaitchMokotoffSoundexFilter |
Create tokens for phonetic matches based on Daitch–Mokotoff Soundex.
|
DaitchMokotoffSoundexFilterFactory |
|
DanishAnalyzer |
|
DanishAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
DanishStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
DataInput |
Abstract base class for performing read operations of Lucene's low-level
data types.
|
DataOutput |
Abstract base class for performing write operations of Lucene's low-level
data types.
|
DatasetSplitter |
Utility class for creating training / test / cross validation indexes from the original index.
|
DateRecognizerFilter |
Filters all tokens that cannot be parsed to a date, using the provided DateFormat.
|
DateRecognizerFilterFactory |
|
DateTools |
Provides support for converting dates to strings and vice-versa.
|
DateTools.Resolution |
Specifies the time granularity.
|
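A small sketch of converting a date to an index-friendly string at DAY resolution and back; the resolution choice is arbitrary.

    import java.text.ParseException;
    import java.util.Date;
    import org.apache.lucene.document.DateTools;

    public class DateToolsSketch {
      public static void main(String[] args) throws ParseException {
        String s = DateTools.dateToString(new Date(), DateTools.Resolution.DAY); // yyyyMMdd at DAY resolution
        Date roundTripped = DateTools.stringToDate(s);
        System.out.println(s + " -> " + roundTripped);
      }
    }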
DecimalDigitFilter |
Folds all Unicode digits in [:General_Category=Decimal_Number:]
to Basic Latin digits (0-9).
|
DecimalDigitFilterFactory |
|
DecompoundToken |
A token that was generated from a compound.
|
Decompressor |
A decompressor.
|
DefaultBloomFilterFactory |
Default policy is to allocate a bitset with 10% saturation given a unique term per document.
|
DefaultEncoder |
Simple Encoder implementation that does not modify the output
|
DefaultICUTokenizerConfig |
|
DefaultIndexingChain |
Default general purpose indexing chain, which handles
indexing all types of fields.
|
DefaultPassageFormatter |
Creates a formatted snippet from the top passages.
|
DefaultPhraseSlopQueryNodeProcessor |
|
DefFunction |
ValueSource implementation which only returns the values from the provided
ValueSources which are available for a particular docId.
|
DelegatingAnalyzerWrapper |
An analyzer wrapper that does not allow wrapping components or readers.
|
DelegatingAnalyzerWrapper.DelegatingReuseStrategy |
|
DeletedQueryNode |
|
DelimitedBoostTokenFilter |
Characters before the delimiter are the "token", those after are the boost.
|
DelimitedBoostTokenFilterFactory |
|
DelimitedPayloadTokenFilter |
Characters before the delimiter are the "token", those after are the payload.
|
DelimitedPayloadTokenFilterFactory |
|
DelimitedTermFrequencyTokenFilter |
Characters before the delimiter are the "token", the textual integer after is the term frequency.
|
DelimitedTermFrequencyTokenFilterFactory |
|
DeltaBaseTermStateSerializer |
TermState serializer which encodes each file pointer as a delta relative
to a base file pointer.
|
DeltaPackedLongValues |
|
DeltaPackedLongValues.Builder |
|
DFISimilarity |
Implements the Divergence from Independence (DFI) model based on Chi-square statistics
(i.e., standardized Chi-squared distance from independence in term frequency tf).
|
DFRSimilarity |
Implements the divergence from randomness (DFR) framework
introduced in Gianni Amati and Cornelis Joost Van Rijsbergen.
|
Dictionary |
In-memory structure for the dictionary (.dic) and affix (.aff)
data of a hunspell dictionary.
|
Dictionary |
Dictionary interface for retrieving morphological data
by id.
|
Dictionary |
Dictionary interface for retrieving morphological data
by id.
|
Dictionary |
A simple interface representing a Dictionary.
|
Dictionary.DoubleASCIIFlagParsingStrategy |
Implementation of Dictionary.FlagParsingStrategy that assumes each flag is encoded as two ASCII characters whose codes
must be combined into a single character.
|
Dictionary.FlagParsingStrategy |
Abstraction of the process of parsing flags taken from the affix and dic files
|
Dictionary.Morpheme |
A morpheme extracted from a compound token.
|
Dictionary.NumFlagParsingStrategy |
|
Dictionary.SimpleFlagParsingStrategy |
|
DictionaryBuilder |
Tool to build dictionaries.
|
DictionaryBuilder |
Tool to build dictionaries.
|
DictionaryBuilder.DictionaryFormat |
Format of the dictionary.
|
DictionaryCompoundWordTokenFilter |
A TokenFilter that decomposes compound words found in many Germanic languages.
|
DictionaryCompoundWordTokenFilterFactory |
|
DictionaryToken |
|
Diff |
The Diff object generates a patch string.
|
DifferenceIntervalsSource |
|
DiffIt |
The DiffIt class is a means to generate patch commands from an already prepared
stemmer table.
|
Direct16 |
Direct wrapping of 16-bit values to a backing array.
|
Direct32 |
Direct wrapping of 32-bit values to a backing array.
|
Direct64 |
Direct wrapping of 64-bit values to a backing array.
|
Direct8 |
Direct wrapping of 8-bit values to a backing array.
|
DirectDocValuesConsumer |
|
DirectDocValuesFormat |
In-memory docvalues format that does no (or very little)
compression.
|
DirectDocValuesProducer |
|
DirectDocValuesProducer.BinaryEntry |
|
DirectDocValuesProducer.BinaryRawValues |
|
DirectDocValuesProducer.FSTEntry |
|
DirectDocValuesProducer.NumericEntry |
|
DirectDocValuesProducer.NumericRawValues |
|
DirectDocValuesProducer.SortedEntry |
|
DirectDocValuesProducer.SortedNumericEntry |
|
DirectDocValuesProducer.SortedNumericRawValues |
|
DirectDocValuesProducer.SortedRawValues |
|
DirectDocValuesProducer.SortedSetEntry |
|
DirectDocValuesProducer.SortedSetRawValues |
|
DirectMonotonicReader |
|
DirectMonotonicReader.Meta |
|
DirectMonotonicWriter |
Write monotonically-increasing sequences of integers.
|
Directory |
A Directory provides an abstraction layer for storing a
list of files.
|
DirectoryReader |
|
DirectPacked64SingleBlockReader |
|
DirectPackedReader |
|
DirectPostingsFormat |
Wraps Lucene84PostingsFormat format for on-disk
storage, but then at read time loads and stores all
terms and postings directly in RAM as byte[], int[].
|
DirectPostingsFormat.DirectField |
|
DirectPostingsFormat.DirectField.HighFreqTerm |
|
DirectPostingsFormat.DirectField.IntArrayWriter |
|
DirectPostingsFormat.DirectField.LowFreqTerm |
|
DirectPostingsFormat.DirectField.TermAndSkip |
|
DirectPostingsFormat.DirectFields |
|
DirectPostingsFormat.HighFreqDocsEnum |
|
DirectPostingsFormat.HighFreqPostingsEnum |
|
DirectPostingsFormat.LowFreqDocsEnum |
|
DirectPostingsFormat.LowFreqDocsEnumNoPos |
|
DirectPostingsFormat.LowFreqDocsEnumNoTF |
|
DirectPostingsFormat.LowFreqPostingsEnum |
|
DirectReader |
|
DirectReader.DirectPackedReader1 |
|
DirectReader.DirectPackedReader12 |
|
DirectReader.DirectPackedReader16 |
|
DirectReader.DirectPackedReader2 |
|
DirectReader.DirectPackedReader20 |
|
DirectReader.DirectPackedReader24 |
|
DirectReader.DirectPackedReader28 |
|
DirectReader.DirectPackedReader32 |
|
DirectReader.DirectPackedReader4 |
|
DirectReader.DirectPackedReader40 |
|
DirectReader.DirectPackedReader48 |
|
DirectReader.DirectPackedReader56 |
|
DirectReader.DirectPackedReader64 |
|
DirectReader.DirectPackedReader8 |
|
DirectSpellChecker |
Simple automaton-based spellchecker.
|
DirectSpellChecker.ScoreTerm |
|
DirectWriter |
Class for writing packed integers to be directly read from Directory.
|
DisiPriorityQueue |
A priority queue of DocIdSetIterators that orders by current doc ID.
|
DisiPriorityQueue |
A priority queue of DocIdSetIterators that orders by current doc ID.
|
DisiWrapper |
|
DisiWrapper |
|
DisjunctionDISIApproximation |
A DocIdSetIterator which is a disjunction of the approximations of
the provided iterators.
|
DisjunctionDISIApproximation |
A DocIdSetIterator which is a disjunction of the approximations of
the provided iterators.
|
DisjunctionIntervalsSource |
|
DisjunctionIntervalsSource.DisjunctionIntervalIterator |
|
DisjunctionIntervalsSource.DisjunctionMatchesIterator |
|
DisjunctionMatchesIterator |
A MatchesIterator that combines matches from a set of sub-iterators.
Matches are sorted by their start positions, and then by their end positions, so that
prefixes sort first.
|
DisjunctionMatchesIterator.TermsEnumDisjunctionMatchesIterator |
|
DisjunctionMaxQuery |
A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum
score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.
|
DisjunctionMaxQueryBuilder |
|
DisjunctionMaxScorer |
The Scorer for DisjunctionMaxQuery.
|
Disjunctions |
|
DisjunctionScoreBlockBoundaryPropagator |
A helper to propagate block boundaries for disjunctions.
|
DisjunctionScorer |
Base class for Scorers that score disjunctions.
|
DisjunctionSumScorer |
A Scorer for OR-like queries, counterpart of ConjunctionScorer.
|
DistanceQuery |
Factory for NEAR queries
|
DistanceRewriteQuery |
|
DistanceSubQuery |
Interface for queries that can be nested as subqueries
into a span near.
|
DistinctValuesCollector<T,R> |
A second pass grouping collector that keeps track of distinct values for a specified field for the top N group.
|
DistinctValuesCollector.DistinctValuesReducer<T,R> |
|
DistinctValuesCollector.GroupCount<T,R> |
|
DistinctValuesCollector.ValuesCollector<R> |
|
Distribution |
The probabilistic distribution used to model term occurrence
in information-based models.
|
DistributionLL |
Log-logistic distribution.
|
DistributionSPL |
The smoothed power-law (SPL) distribution for the information-based framework
that is described in the original paper.
|
DiversifiedTopDocsCollector |
A TopDocsCollector that controls diversity in results by ensuring no
more than maxHitsPerKey results from a common source are collected in the
final results.
|
DiversifiedTopDocsCollector.ScoreDocKey |
An extension to ScoreDoc that includes a key used for grouping purposes
|
DiversifiedTopDocsCollector.ScoreDocKeyQueue |
|
DivFloatFunction |
Function to divide "a" by "b"
|
DocConsumer |
|
DocFreqValueSource |
DocFreqValueSource returns the number of documents containing the term.
|
DocFreqValueSource.ConstDoubleDocValues |
|
DocFreqValueSource.ConstIntDocValues |
|
DocIDMerger<T extends DocIDMerger.Sub> |
Utility class to help merging documents from sub-readers according to either simple
concatenated (unsorted) order, or by a specified index-time sort, skipping
deleted documents and remapping non-deleted documents.
|
DocIDMerger.SequentialDocIDMerger<T extends DocIDMerger.Sub> |
|
DocIDMerger.SortedDocIDMerger<T extends DocIDMerger.Sub> |
|
DocIDMerger.Sub |
Represents one sub-reader being merged
|
DocIdSet |
A DocIdSet contains a set of doc ids.
|
DocIdSetBuilder |
|
DocIdSetBuilder.Buffer |
|
DocIdSetBuilder.BufferAdder |
|
DocIdSetBuilder.BulkAdder |
Utility class to efficiently add many docs in one go.
|
DocIdSetBuilder.FixedBitSetAdder |
|
DocIdSetIterator |
This abstract class defines methods to iterate over a set of non-decreasing
doc ids.
|
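A sketch of the standard consumption loop for any DocIdSetIterator; the counting helper is hypothetical.

    import java.io.IOException;
    import org.apache.lucene.search.DocIdSetIterator;

    public class DisiSketch {
      // Counts the documents an iterator visits; ids come back in increasing order.
      static int count(DocIdSetIterator it) throws IOException {
        int count = 0;
        for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
          count++;   // 'doc' is the current doc id
        }
        return count;
      }
    }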
DocIdsWriter |
|
DocsWithFieldSet |
Accumulator for documents that have a value for a field.
|
DocTermsIndexDocValues |
Serves as base class for FunctionValues based on DocTermsIndex.
|
DocTermsIndexDocValues.DocTermsIndexException |
Custom Exception to be thrown when the DocTermsIndex for a field cannot be generated
|
DocToDoubleVectorUtils |
Utility class for converting Lucene Documents to Double vectors.
|
Document |
Documents are the unit of indexing and search.
|
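A minimal sketch of assembling a Document from fields; the field names and values are invented.

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;

    public class DocumentSketch {
      static Document newDoc() {
        Document doc = new Document();
        doc.add(new StringField("id", "42", Field.Store.YES));                // indexed as a single token
        doc.add(new TextField("body", "full-text content", Field.Store.NO));  // analyzed/tokenized
        return doc;
      }
    }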
DocumentBatch |
|
DocumentBatch.MultiDocumentBatch |
|
DocumentBatch.SingletonDocumentBatch |
|
DocumentClassifier<T> |
A classifier (see http://en.wikipedia.org/wiki/Classifier_(mathematics))
which assigns classes of type T to Documents.
|
DocumentDictionary |
Dictionary with terms, weights, payload (optional) and contexts (optional)
information taken from stored/indexed fields in a Lucene index.
|
DocumentStoredFieldVisitor |
|
DocumentsWriter |
This class accepts multiple added documents and directly
writes segment files.
|
DocumentsWriter.FlushNotifications |
|
DocumentsWriterDeleteQueue |
|
DocumentsWriterDeleteQueue.DeleteSlice |
|
DocumentsWriterDeleteQueue.DocValuesUpdatesNode |
|
DocumentsWriterDeleteQueue.Node<T> |
|
DocumentsWriterDeleteQueue.QueryArrayNode |
|
DocumentsWriterDeleteQueue.TermArrayNode |
|
DocumentsWriterDeleteQueue.TermNode |
|
DocumentsWriterFlushControl |
|
DocumentsWriterFlushQueue |
|
DocumentsWriterFlushQueue.FlushTicket |
|
DocumentsWriterPerThread |
|
DocumentsWriterPerThread.FlushedSegment |
|
DocumentsWriterPerThread.IndexingChain |
|
DocumentsWriterPerThread.IntBlockAllocator |
|
DocumentsWriterPerThreadPool |
|
DocumentsWriterStallControl |
|
DocumentValueSourceDictionary |
Dictionary with terms and optionally payload and
optionally contexts information
taken from stored fields in a Lucene index.
|
DocValues |
This class contains utility methods and constants for DocValues
|
DocValuesConsumer |
Abstract API that consumes numeric, binary and
sorted docvalues.
|
DocValuesConsumer.BinaryDocValuesSub |
Tracks state of one binary sub-reader that we are merging
|
DocValuesConsumer.BitsFilteredTermsEnum |
|
DocValuesConsumer.MergedTermsEnum |
|
DocValuesConsumer.NumericDocValuesSub |
Tracks state of one numeric sub-reader that we are merging
|
DocValuesConsumer.SortedDocValuesSub |
Tracks state of one sorted sub-reader that we are merging
|
DocValuesConsumer.SortedNumericDocValuesSub |
Tracks state of one sorted numeric sub-reader that we are merging
|
DocValuesConsumer.SortedSetDocValuesSub |
Tracks state of one sorted set sub-reader that we are merging
|
DocValuesFieldExistsQuery |
A Query that matches documents that have a value for a given field
as reported by doc values iterators.
|
DocValuesFieldUpdates |
Holds updates of a single DocValues field, for a set of documents within one segment.
|
DocValuesFieldUpdates.AbstractIterator |
|
DocValuesFieldUpdates.Iterator |
An iterator over documents and their updated values.
|
DocValuesFieldUpdates.SingleValueDocValuesFieldUpdates |
|
DocValuesFormat |
Encodes/decodes per-document values.
|
DocValuesFormat.Holder |
This static holder class prevents classloading deadlock by delaying
init of doc values formats until needed.
|
DocValuesIterator |
|
DocValuesLeafReader |
|
DocValuesNumbersQuery |
|
DocValuesProducer |
Abstract API that produces numeric, binary, sorted, sortedset,
and sortednumeric docvalues.
|
DocValuesRewriteMethod |
Rewrites MultiTermQueries into a filter, using DocValues for term enumeration.
|
DocValuesRewriteMethod.MultiTermQueryDocValuesWrapper |
|
DocValuesStats<T> |
Holds statistics for a DocValues field.
|
DocValuesStats.DoubleDocValuesStats |
Holds DocValues statistics for a numeric field storing double values.
|
DocValuesStats.LongDocValuesStats |
Holds DocValues statistics for a numeric field storing long values.
|
DocValuesStats.NumericDocValuesStats<T extends java.lang.Number> |
Holds statistics for a numeric DocValues field.
|
DocValuesStats.SortedDocValuesStats |
Holds statistics for a sorted DocValues field.
|
DocValuesStats.SortedDoubleDocValuesStats |
Holds DocValues statistics for a sorted-numeric field storing double values.
|
DocValuesStats.SortedLongDocValuesStats |
Holds DocValues statistics for a sorted-numeric field storing long values.
|
DocValuesStats.SortedNumericDocValuesStats<T extends java.lang.Number> |
Holds statistics for a sorted-numeric DocValues field.
|
DocValuesStats.SortedSetDocValuesStats |
Holds statistics for a sorted-set DocValues field.
|
DocValuesStatsCollector |
A Collector which computes statistics for a DocValues field.
|
DocValuesTermsCollector<DV> |
|
DocValuesTermsCollector.Function<R> |
|
DocValuesTermsQuery |
A Query that only accepts documents whose
term value in the specified field is contained in the
provided set of allowed terms.
|
DocValuesType |
DocValues types.
|
DocValuesUpdate |
An in-place update to a DocValues field.
|
DocValuesUpdate.BinaryDocValuesUpdate |
An in-place update to a binary DocValues field
|
DocValuesUpdate.NumericDocValuesUpdate |
An in-place update to a numeric DocValues field
|
DocValuesWriter<T extends DocIdSetIterator> |
|
DOMUtils |
Helper methods for parsing XML
|
DoubleConstValueSource |
Function that returns a constant double value for every document.
|
DoubleDocValues |
Abstract FunctionValues implementation which supports retrieving double values.
|
DoubleDocValuesField |
Syntactic sugar for encoding doubles as NumericDocValues
via Double.doubleToRawLongBits(double) .
|
DoubleFieldSource |
|
DoubleMetaphoneFilter |
Filter for DoubleMetaphone (supporting secondary codes)
|
DoubleMetaphoneFilterFactory |
|
DoublePoint |
An indexed double field for fast range filters.
|
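For orientation, a minimal sketch of how a DoublePoint might be indexed and then filtered with a range query; the field name "price" and the bounds are illustrative, not taken from any Lucene example.

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.DoublePoint;
    import org.apache.lucene.search.Query;

    class DoublePointSketch {
      static Document makeDoc() {
        // Index a double value as a 1-dimensional point.
        Document doc = new Document();
        doc.add(new DoublePoint("price", 9.99));
        return doc;
      }

      static Query priceBetween(double low, double high) {
        // Match documents whose "price" lies in [low, high].
        return DoublePoint.newRangeQuery("price", low, high);
      }
    }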
DoublePointMultiRangeBuilder |
Builder for multi range queries for DoublePoints
|
DoubleRange |
An indexed Double Range field.
|
DoubleRange |
Represents a contiguous range of double values, with an inclusive minimum and
exclusive maximum
|
DoubleRangeDocValuesField |
DocValues field for DoubleRange.
|
DoubleRangeFactory |
Groups double values into ranges
|
DoubleRangeGroupSelector |
A GroupSelector implementation that groups documents by double values
|
DoubleRangeSlowRangeQuery |
|
DoubleValues |
Per-segment, per-document double values, which can be calculated at search-time
|
DoubleValuesSource |
|
DoubleValuesSource.ConstantValuesSource |
|
DoubleValuesSource.DoubleValuesComparatorSource |
|
DoubleValuesSource.DoubleValuesHolder |
|
DoubleValuesSource.DoubleValuesSortField |
|
DoubleValuesSource.FieldValuesSource |
|
DoubleValuesSource.LongDoubleValuesSource |
|
DoubleValuesSource.QueryDoubleValuesSource |
|
DoubleValuesSource.WeightDoubleValuesSource |
|
DualFloatFunction |
Abstract ValueSource implementation which wraps two ValueSources
and applies an extendible float function to their values.
|
DummyQueryNodeBuilder |
This builder does nothing.
|
DutchAnalyzer |
|
DutchAnalyzer.DefaultSetHolder |
|
DutchStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
EdgeNGramFilterFactory |
|
EdgeNGramTokenFilter |
Tokenizes the given token into n-grams of given size(s).
|
EdgeNGramTokenizer |
Tokenizes the input from an edge into n-grams of given size(s).
|
EdgeNGramTokenizerFactory |
|
EdgeTree |
Internal tree node: represents geometry edge from [x1, y1] to [x2, y2].
|
ElisionFilter |
|
ElisionFilterFactory |
|
EmptyDocValuesProducer |
|
EmptyTokenStream |
An always exhausted token stream.
|
Encoder |
Encodes original text.
|
EnglishAnalyzer |
|
EnglishMinimalStemFilter |
|
EnglishMinimalStemFilterFactory |
|
EnglishMinimalStemmer |
Minimal plural stemmer for English.
|
EnglishPossessiveFilter |
TokenFilter that removes possessives (trailing 's) from words.
|
EnglishPossessiveFilterFactory |
|
EnglishStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
EnumFieldSource |
|
EscapeQuerySyntax |
A parser needs to implement EscapeQuerySyntax to allow the QueryNode
to escape the queries, when the toQueryString method is called.
|
EscapeQuerySyntax.Type |
Type of escaping: String for escaping syntax,
NORMAL for escaping reserved words (like AND) in terms
|
EscapeQuerySyntaxImpl |
|
EstonianAnalyzer |
|
EstonianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
EstonianStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
ExactPhraseMatcher |
|
ExactPhraseMatcher.PostingsAndPosition |
|
ExitableDirectoryReader |
|
ExitableDirectoryReader.ExitableFilterAtomicReader |
Wrapper class for another FilterAtomicReader.
|
ExitableDirectoryReader.ExitableIntersectVisitor |
|
ExitableDirectoryReader.ExitablePointValues |
Wrapper class for another PointValues implementation that is used by ExitableFields.
|
ExitableDirectoryReader.ExitableSubReaderWrapper |
Wrapper class for a SubReaderWrapper that is used by the ExitableDirectoryReader.
|
ExitableDirectoryReader.ExitableTerms |
Wrapper class for another Terms implementation that is used by ExitableFields.
|
ExitableDirectoryReader.ExitableTermsEnum |
Wrapper class for TermsEnum that is used by ExitableTerms for implementing an
exitable enumeration of terms.
|
ExitableDirectoryReader.ExitingReaderException |
Exception that is thrown to prematurely terminate a term enumeration.
|
ExplainingMatch |
A query match containing the score explanation of the match
|
Explanation |
Expert: Describes the score computation for document and query.
|
Expression |
Base class that computes the value of an expression for a document.
|
ExpressionFunctionValues |
|
ExpressionRescorer |
A Rescorer that uses an expression to re-score
first pass hits.
|
ExpressionValueSource |
|
ExtendableQueryParser |
The ExtendableQueryParser enables arbitrary query parser extension
based on a customizable field naming scheme.
|
ExtendedIntervalIterator |
Wraps an IntervalIterator and extends the bounds of its intervals.
Useful for specifying gaps in an ordered iterator; if you want to match
`a b [2 spaces] c`, you can search for phrase(a, extended(b, 0, 2), c).
An interval with prefix bounds extended by n will skip over matches that
appear in positions lower than n.
|
ExtendedIntervalsSource |
|
ExtensionQuery |
ExtensionQuery holds all query components extracted from the original
query string like the query field and the extension query string.
|
Extensions |
|
Extensions.Pair<Cur,Cud> |
This class represents a generic pair.
|
ExternalRefSorter |
Builds and iterates over sequences stored on disk.
|
ExternalRefSorter.ByteSequenceIterator |
Iterate over byte refs in a file.
|
FastCharStream |
An efficient implementation of JavaCC's CharStream interface.
|
FastCharStream |
An efficient implementation of JavaCC's CharStream interface.
|
FastCharStream |
An efficient implementation of JavaCC's CharStream interface.
|
FastVectorHighlighter |
Another highlighter implementation.
|
FeatureDoubleValuesSource |
|
FeatureDoubleValuesSource.FeatureDoubleValues |
|
FeatureField |
Field that can be used to store static scoring factors into
documents.
|
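A hedged sketch of the pattern this class supports: a static factor (here a hypothetical "pagerank" feature in a "features" field) is stored at index time and folded into scoring via one of the static query factories; the text query is assumed to exist elsewhere.

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.FeatureField;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;

    class FeatureFieldSketch {
      static Document makeDoc() {
        Document doc = new Document();
        // Store the scoring factor; only its value matters, it is not searchable text.
        doc.add(new FeatureField("features", "pagerank", 3.5f));
        return doc;
      }

      static Query boostedQuery(Query textQuery) {
        // Optional SHOULD clause: higher pagerank saturates toward a higher boost.
        Query boost = FeatureField.newSaturationQuery("features", "pagerank");
        return new BooleanQuery.Builder()
            .add(textQuery, BooleanClause.Occur.MUST)
            .add(boost, BooleanClause.Occur.SHOULD)
            .build();
      }
    }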
FeatureField.FeatureFunction |
|
FeatureField.FeatureTokenStream |
|
FeatureField.LogFunction |
|
FeatureField.SaturationFunction |
|
FeatureField.SigmoidFunction |
|
FeatureQuery |
|
FeatureSortField |
Sorts using the value of a specified feature name from a FeatureField .
|
Field |
Expert: directly create a field for a document.
|
Field.BinaryTokenStream |
|
Field.Store |
Specifies whether and how a field should be stored.
|
Field.StringTokenStream |
|
FieldableNode |
A query node implements FieldableNode interface to indicate that its
children and itself are associated to a specific field.
|
FieldBoostMapFCListener |
|
FieldCacheSource |
A base class for ValueSource implementations that retrieve values for
a single field from DocValues.
|
FieldComparator<T> |
Expert: a FieldComparator compares hits so as to determine their
sort order when collecting the top results with TopFieldCollector .
|
FieldComparator.DocComparator |
Sorts by ascending docID
|
FieldComparator.DoubleComparator |
|
FieldComparator.FloatComparator |
|
FieldComparator.IntComparator |
|
FieldComparator.LongComparator |
|
FieldComparator.NumericComparator<T extends java.lang.Number> |
Base FieldComparator class for numeric types
|
FieldComparator.RelevanceComparator |
Sorts by descending relevance.
|
FieldComparator.TermOrdValComparator |
Sorts by field's natural Term sort order, using
ordinals.
|
FieldComparator.TermValComparator |
Sorts by field's natural Term sort order.
|
FieldComparatorSource |
|
FieldConfig |
This class represents a field configuration.
|
FieldConfigListener |
This interface should be implemented by classes that want to listen for
field configuration requests.
|
FieldDateResolutionFCListener |
|
FieldDoc |
Expert: A ScoreDoc which also contains information about
how to sort the referenced document.
|
FieldFragList |
FieldFragList has a list of "frag info" that is used by FragmentsBuilder class
to create fragments (snippets).
|
FieldFragList.WeightedFragInfo |
List of term offsets + weight for a frag info
|
FieldFragList.WeightedFragInfo.SubInfo |
Represents the list of term offsets for some text
|
FieldHighlighter |
Internal highlighter abstraction that operates on a per field basis.
|
FieldInfo |
Access to the Field Info file that describes document fields and whether or
not they are indexed.
|
FieldInfos |
Collection of FieldInfo instances (accessible by number or by name).
|
FieldInfos.Builder |
|
FieldInfos.FieldDimensions |
|
FieldInfos.FieldNumbers |
|
FieldInfosFormat |
|
FieldInvertState |
This class tracks the number and position / offset parameters of terms
being added to the index.
|
FieldMaskingSpanQuery |
Wrapper to allow SpanQuery objects participate in composite
single-field SpanQueries by 'lying' about their search field.
|
FieldMetadata |
Metadata and stats for one field in the index.
|
FieldMetadata.Serializer |
Reads/writes field metadata.
|
FieldMetadataTermState |
|
FieldOffsetStrategy |
Ultimately returns an OffsetsEnum yielding potentially highlightable words in the text.
|
FieldPhraseList |
FieldPhraseList has a list of WeightedPhraseInfo that is used by FragListBuilder
to create a FieldFragList object.
|
FieldPhraseList.WeightedPhraseInfo |
Represents the list of term offsets and boost for some text
|
FieldPhraseList.WeightedPhraseInfo.Toffs |
Term offsets (start + end)
|
FieldQuery |
FieldQuery breaks down query object into terms/phrases and keeps
them in a QueryPhraseMap structure.
|
FieldQuery.QueryPhraseMap |
Internal structure of a query for highlighting: represents
a nested query structure
|
FieldQueryNode |
|
FieldQueryNodeBuilder |
|
FieldReader |
BlockTree's implementation of Terms .
|
Fields |
Provides a Terms index for fields that have it, and lists which fields do.
|
FieldsConsumer |
Abstract API that consumes terms, doc, freq, prox, offset and
payloads postings.
|
FieldsIndex |
|
FieldsIndexReader |
|
FieldsIndexWriter |
Efficient index format for block-based Codecs.
|
FieldsProducer |
Abstract API that produces terms, doc, freq, prox, offset and
payloads postings.
|
FieldsQuery |
Forms an OR query of the provided query across multiple fields.
|
FieldTermIterator |
Iterates over terms across multiple fields.
|
FieldTermStack |
FieldTermStack is a stack that keeps query terms in the specified field
of the document to be highlighted.
|
FieldTermStack.TermInfo |
Single term with its position/offsets in the document and IDF weight.
|
FieldType |
Describes the properties of a field.
|
FieldUpdatesBuffer |
This class efficiently buffers numeric and binary field updates and stores
terms, values and metadata in a memory efficient way without creating large amounts
of objects.
|
FieldUpdatesBuffer.BufferedUpdate |
Struct like class that is used to iterate over all updates in this buffer
|
FieldValueHitQueue<T extends FieldValueHitQueue.Entry> |
Expert: A hit queue for sorting hits by terms in more than one field.
|
FieldValueHitQueue.Entry |
|
FieldValueHitQueue.MultiComparatorsFieldValueHitQueue<T extends FieldValueHitQueue.Entry> |
An implementation of FieldValueHitQueue which is optimized in case
there is more than one comparator.
|
FieldValueHitQueue.OneComparatorFieldValueHitQueue<T extends FieldValueHitQueue.Entry> |
An implementation of FieldValueHitQueue which is optimized in case
there is just one comparator.
|
FieldValuePairQueryNode<T> |
This interface should be implemented by QueryNode that holds a field
and an arbitrary value.
|
FileDictionary |
Dictionary represented by a text file.
|
FileSwitchDirectory |
Expert: A Directory instance that switches files between
two other Directory instances.
|
FilesystemResourceLoader |
Simple ResourceLoader that opens resource files
from the local file system, optionally resolving against
a base directory.
|
FilterBinaryDocValues |
|
FilterCodec |
A codec that forwards all its method calls to another codec.
|
FilterCodecReader |
A FilterCodecReader contains another CodecReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
|
FilterCollector |
|
FilterDirectory |
Directory implementation that delegates calls to another directory.
|
FilterDirectoryReader |
A FilterDirectoryReader wraps another DirectoryReader, allowing implementations
to transform or extend it.
|
FilterDirectoryReader.SubReaderWrapper |
Factory class passed to FilterDirectoryReader constructor that allows
subclasses to wrap the filtered DirectoryReader's subreaders.
|
FilteredDocIdSetIterator |
Abstract decorator class of a DocIdSetIterator
implementation that provides on-demand filter/validation
mechanism on an underlying DocIdSetIterator.
|
FilteredIntervalsSource |
An IntervalsSource that filters the intervals from another IntervalsSource
|
FilteredIntervalsSource.MaxGaps |
|
FilteredIntervalsSource.MaxWidth |
|
FilteredTermsEnum |
Abstract class for enumerating a subset of all terms.
|
FilteredTermsEnum.AcceptStatus |
Return value indicating whether a term should be accepted or the iteration should END.
|
FilteringIntervalIterator |
|
FilteringTokenFilter |
Abstract base class for TokenFilters that may remove tokens.
|
FilterIterator<T,InnerT extends T> |
An Iterator implementation that filters elements with a boolean predicate.
|
FilterLeafCollector |
|
FilterLeafReader |
A FilterLeafReader contains another LeafReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
|
FilterLeafReader.FilterFields |
Base class for filtering Fields
implementations.
|
FilterLeafReader.FilterPostingsEnum |
|
FilterLeafReader.FilterTerms |
Base class for filtering Terms implementations.
|
FilterLeafReader.FilterTermsEnum |
Base class for filtering TermsEnum implementations.
|
FilterMatchesIterator |
A MatchesIterator that delegates all calls to another MatchesIterator
|
FilterMergePolicy |
|
FilterNumericDocValues |
|
FilterScorable |
Filter a Scorable, intercepting methods and optionally changing their return values.
The default implementation simply passes all calls to its delegate, with
the exception of Scorable.setMinCompetitiveScore(float), which defaults
to a no-op.
|
FilterScorer |
A FilterScorer contains another Scorer , which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
|
FilterSortedDocValues |
|
FilterSortedNumericDocValues |
|
FilterSortedSetDocValues |
|
FilterSpans |
|
FilterSpans.AcceptStatus |
Status returned from FilterSpans.accept(Spans) that indicates
whether a candidate match should be accepted, rejected, or rejected
and move on to the next document.
|
FilterWeight |
A FilterWeight contains another Weight and implements
all abstract methods by calling the contained weight's method.
|
FingerprintFilter |
Filter outputs a single token which is a concatenation of the sorted and
de-duplicated set of input tokens.
|
FingerprintFilterFactory |
|
FiniteStringsIterator |
Iterates all accepted strings.
|
FiniteStringsIterator.PathNode |
Nodes for path stack.
|
FinnishAnalyzer |
|
FinnishAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
FinnishLightStemFilter |
|
FinnishLightStemFilterFactory |
|
FinnishLightStemmer |
Light Stemmer for Finnish.
|
FinnishStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
FirstPassGroupingCollector<T> |
FirstPassGroupingCollector is the first of two passes necessary
to collect grouped hits.
|
FixBrokenOffsetsFilter |
Deprecated.
|
FixBrokenOffsetsFilterFactory |
|
FixedBits |
Immutable twin of FixedBitSet.
|
FixedBitSet |
|
FixedFieldIntervalsSource |
|
FixedGapTermsIndexReader |
TermsIndexReader for simple every Nth terms indexes.
|
FixedGapTermsIndexWriter |
Selects every Nth term as an index term, and holds term
bytes (mostly) fully expanded in memory.
|
FixedLengthBytesRefArray |
|
FixedShingleFilter |
A FixedShingleFilter constructs shingles (token n-grams) from a token stream.
|
FixedShingleFilterFactory |
Factory for FixedShingleFilter.
Parameters are:
shingleSize - how many tokens should be combined into each shingle (default: 2)
tokenSeparator - how tokens should be joined together in the shingle (default: space)
fillerToken - what should be added in place of stop words (default: _)
|
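A small sketch of wiring this factory into an analysis chain via CustomAnalyzer; the SPI name "fixedShingle" and the tokenizer name "standard" are assumed to match the registered factory names and may differ by release.

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.custom.CustomAnalyzer;

    class FixedShingleSketch {
      static Analyzer build() throws IOException {
        // Emit fixed two-token shingles, joined with a single space.
        return CustomAnalyzer.builder()
            .withTokenizer("standard")
            .addTokenFilter("fixedShingle", "shingleSize", "2", "tokenSeparator", " ")
            .build();
      }
    }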
FlagsAttribute |
This attribute can be used to pass different flags down the Tokenizer chain,
e.g.
|
FlagsAttributeImpl |
|
FlattenGraphFilter |
Converts an incoming graph token stream, such as one from
SynonymGraphFilter , into a flat form so that
all nodes form a single linear chain with no side paths.
|
FlattenGraphFilter.InputNode |
Holds all tokens leaving a given input position.
|
FlattenGraphFilter.OutputNode |
Gathers up merged input positions into a single output position,
only for the current "frontier" of nodes we've seen but can't yet
output because they are not frozen.
|
FlattenGraphFilterFactory |
|
FloatDocValues |
Abstract FunctionValues implementation which supports retrieving float values.
|
FloatDocValuesField |
Syntactic sugar for encoding floats as NumericDocValues
via Float.floatToRawIntBits(float) .
|
FloatEncoder |
Encode a character array Float as a BytesRef .
|
FloatFieldSource |
|
FloatPoint |
An indexed float field for fast range filters.
|
FloatPointMultiRangeBuilder |
Builder for multi range queries for FloatPoints
|
FloatPointNearestNeighbor |
KNN search on top of N dimensional indexed float points.
|
FloatPointNearestNeighbor.Cell |
|
FloatPointNearestNeighbor.NearestHit |
|
FloatPointNearestNeighbor.NearestVisitor |
|
FloatRange |
An indexed Float Range field.
|
FloatRangeDocValuesField |
DocValues field for FloatRange.
|
FloatRangeSlowRangeQuery |
|
FlushByRamOrCountsPolicy |
Default FlushPolicy implementation that flushes new segments based on
RAM used and document count depending on the IndexWriter's
IndexWriterConfig .
|
FlushInfo |
A FlushInfo provides information required for a FLUSH context.
|
FlushPolicy |
|
ForceNoBulkScoringQuery |
Query wrapper that forces its wrapped Query to use the default doc-by-doc
BulkScorer.
|
ForDeltaUtil |
Utility class to encode/decode increasing sequences of 128 integers.
|
Formatter |
Processes terms found in the original text, typically by applying some form
of mark-up to highlight terms in HTML search results pages.
|
ForUtil |
Encodes all values in the normal area with a fixed bit width,
which is determined by the max value in this block.
|
ForUtil |
|
ForwardBytesReader |
Reads from a single byte[].
|
FragListBuilder |
FragListBuilder is an interface for FieldFragList builder classes.
|
Fragmenter |
Implements the policy for breaking text into multiple fragments for
consideration by the Highlighter class.
|
FragmentsBuilder |
|
FreeTextSuggester |
|
FrenchAnalyzer |
|
FrenchAnalyzer.DefaultSetHolder |
|
FrenchLightStemFilter |
|
FrenchLightStemFilterFactory |
|
FrenchLightStemmer |
Light Stemmer for French.
|
FrenchMinimalStemFilter |
|
FrenchMinimalStemFilterFactory |
|
FrenchMinimalStemmer |
Light Stemmer for French.
|
FrenchStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
FreqProxFields |
Implements limited (iterators only, no stats) Fields interface over the in-RAM buffered
fields/terms/postings, to flush postings through the
PostingsFormat.
|
FreqProxFields.FreqProxDocsEnum |
|
FreqProxFields.FreqProxPostingsEnum |
|
FreqProxFields.FreqProxTerms |
|
FreqProxFields.FreqProxTermsEnum |
|
FreqProxTermsWriter |
|
FreqProxTermsWriterPerField |
|
FreqProxTermsWriterPerField.FreqProxPostingsArray |
|
FrequencyTrackingRingBuffer |
A ring buffer that tracks the frequency of the integers that it contains.
|
FrequencyTrackingRingBuffer.IntBag |
A bag of integers.
|
FrozenBufferedUpdates |
Holds buffered deletes and updates by term or query, once pushed.
|
FrozenBufferedUpdates.TermDocsIterator |
This class helps iterate a term dictionary and consume all the docs for each term.
|
FrozenBufferedUpdates.TermDocsIterator.TermsProvider |
|
FSDirectory |
Base class for Directory implementations that store index
files in the file system.
|
FSLockFactory |
Base class for file system based locking implementation.
|
FST<T> |
Represents a finite state machine (FST), using a
compact byte[] format.
|
FST.Arc<T> |
Represents a single arc.
|
FST.Arc.BitTable |
Helper methods to read the bit-table of a direct addressing node.
|
FST.BytesReader |
Reads bytes stored in an FST.
|
FST.INPUT_TYPE |
Specifies allowed range of each int input label for
this FST.
|
FSTCompletion |
Finite state automata based implementation of "autocomplete" functionality.
|
FSTCompletion.Completion |
A single completion for a given key.
|
FSTCompletionBuilder |
Finite state automata based implementation of "autocomplete" functionality.
|
FSTCompletionLookup |
|
FSTDictionary |
Immutable stateless FST -based index dictionary kept in memory.
|
FSTDictionary.BrowserSupplier |
|
FSTDictionary.Builder |
|
FSTEnum<T> |
Can next() and advance() through the terms in an FST
|
FSTOrdsOutputs |
A custom FST outputs implementation that stores block data
(BytesRef), long ordStart, long numTerms.
|
FSTOrdsOutputs.Output |
|
FSTPostingsFormat |
FST term dict + Lucene50PBF
|
FSTStore |
Abstraction for reading/writing bytes necessary for FST.
|
FSTTermOutputs |
|
FSTTermOutputs.TermData |
Represents the metadata for one term.
|
FSTTermsReader |
FST-based terms dictionary reader.
|
FSTTermsWriter |
FST-based term dict, using metadata as FST output.
|
FSTTermsWriter.FieldMetaData |
|
FSTUtil |
Exposes a utility method to enumerate all paths
intersecting an Automaton with an FST .
|
FSTUtil.Path<T> |
Holds a pair (automaton, fst) of states and accumulated output in the intersected machine.
|
FunctionMatchQuery |
A query that retrieves all documents with a DoubleValues value matching a predicate.
This query works by a linear scan of the index, and is best used in
conjunction with other queries that can restrict the number of
documents visited.
|
FunctionQuery |
Returns a score for each document based on a ValueSource,
often some function of the value of a field.
|
FunctionRangeQuery |
A Query wrapping a ValueSource that matches docs in which the values in the value source match a configured
range.
|
FunctionScoreQuery |
A query that wraps another query, and uses a DoubleValuesSource to
replace or modify the wrapped query's score.
If the DoubleValuesSource doesn't return a value for a particular document,
then that document will be given a score of 0.
|
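A hedged sketch of a typical use: boosting a wrapped query by a numeric doc values field. The field name "popularity" is illustrative, and the package location (org.apache.lucene.queries.function in recent releases) may differ by version.

    import org.apache.lucene.queries.function.FunctionScoreQuery;
    import org.apache.lucene.search.DoubleValuesSource;
    import org.apache.lucene.search.Query;

    class FunctionScoreSketch {
      static Query boostByPopularity(Query in) {
        // Multiply the wrapped query's score by the value of the
        // "popularity" numeric doc values field.
        return FunctionScoreQuery.boostByValue(
            in, DoubleValuesSource.fromLongField("popularity"));
      }
    }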
FunctionScoreQuery.FunctionScoreWeight |
|
FunctionScoreQuery.MultiplicativeBoostValuesSource |
|
FunctionScoreQuery.QueryBoostValuesSource |
|
FunctionValues |
Represents field values as different types.
|
FunctionValues.ValueFiller |
Abstraction of the logic required to fill the value of a specified doc into
a reusable MutableValue .
|
FutureArrays |
|
FutureObjects |
|
FuzzyAutomatonBuilder |
Builds a set of CompiledAutomaton for fuzzy matching on a given term,
with specified maximum edit distance, fixed prefix and whether or not
to allow transpositions.
|
FuzzyCompletionQuery |
A CompletionQuery that match documents containing terms
within an edit distance of the specified prefix.
|
FuzzyCompletionQuery.FuzzyCompletionWeight |
|
FuzzyConfig |
|
FuzzyLikeThisQuery |
Fuzzifies ALL terms provided as strings and then picks the best n differentiating terms.
|
FuzzyLikeThisQuery.FieldVals |
|
FuzzyLikeThisQuery.ScoreTerm |
|
FuzzyLikeThisQuery.ScoreTermQueue |
|
FuzzyLikeThisQueryBuilder |
|
FuzzyQuery |
Implements the fuzzy search query.
|
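A minimal usage sketch; the field and term are illustrative.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.FuzzyQuery;
    import org.apache.lucene.search.Query;

    class FuzzyQuerySketch {
      static Query fuzzyTitle() {
        // Match terms within an edit distance of 2 from "lucene" in "title".
        return new FuzzyQuery(new Term("title", "lucene"), 2);
      }
    }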
FuzzyQueryNode |
A FuzzyQueryNode represents an element that contains a
field/text/similarity tuple.
|
FuzzyQueryNodeBuilder |
|
FuzzyQueryNodeProcessor |
|
FuzzySet |
A class used to represent a set of many, potentially large, values (e.g.
|
FuzzySet.ContainsResult |
|
FuzzySuggester |
|
FuzzyTermsEnum |
Subclass of TermsEnum for enumerating all terms that are similar
to the specified filter term.
|
FuzzyTermsEnum.AutomatonAttribute |
Used for sharing automata between segments.
Levenshtein automata are large and expensive to build; we don't want to build
them directly on the query because this can blow up caches that use queries
as keys; we also don't want to rebuild them for every segment.
|
FuzzyTermsEnum.AutomatonAttributeImpl |
|
FuzzyTermsEnum.FuzzyTermsException |
Thrown to indicate that there was an issue creating a fuzzy query for a given term.
|
GalicianAnalyzer |
|
GalicianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
GalicianMinimalStemFilter |
|
GalicianMinimalStemFilterFactory |
|
GalicianMinimalStemmer |
Minimal Stemmer for Galician
|
GalicianStemFilter |
|
GalicianStemFilterFactory |
|
GalicianStemmer |
Galician stemmer implementing "Regras do lematizador para o galego".
|
Gener |
The Gener object helps discard nodes which break the reduction
effort and defends the structure against large reductions.
|
GenericTermsCollector |
|
GeoEncodingUtils |
Reusable geopoint encoding methods.
|
GeoEncodingUtils.DistancePredicate |
A predicate that checks whether a given point is within a distance of another point.
|
GeoEncodingUtils.Grid |
|
GeoEncodingUtils.PolygonPredicate |
A predicate that checks whether a given point is within a polygon.
|
GeoUtils |
Basic reusable geo-spatial utility methods
|
GeoUtils.WindingOrder |
Used to define the orientation of 3 points:
-1 = clockwise, 0 = collinear, 1 = counter-clockwise.
|
German2Stemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
GermanAnalyzer |
|
GermanAnalyzer.DefaultSetHolder |
|
GermanLightStemFilter |
|
GermanLightStemFilterFactory |
|
GermanLightStemmer |
Light Stemmer for German.
|
GermanMinimalStemFilter |
|
GermanMinimalStemFilterFactory |
|
GermanMinimalStemmer |
Minimal Stemmer for German.
|
GermanNormalizationFilter |
|
GermanNormalizationFilterFactory |
|
GermanStemFilter |
|
GermanStemFilterFactory |
|
GermanStemmer |
A stemmer for German words.
|
GermanStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
GetTermInfo |
Utility to get document frequency and total number of occurrences (sum of the tf for each doc) of a term.
|
GlobalOrdinalsCollector |
A collector that collects all ordinals from a specified field matching the query.
|
GlobalOrdinalsQuery |
|
GlobalOrdinalsQuery.OrdinalMapScorer |
|
GlobalOrdinalsQuery.SegmentOrdinalScorer |
|
GlobalOrdinalsWithScoreCollector |
|
GlobalOrdinalsWithScoreCollector.Avg |
|
GlobalOrdinalsWithScoreCollector.Max |
|
GlobalOrdinalsWithScoreCollector.Min |
|
GlobalOrdinalsWithScoreCollector.NoScore |
|
GlobalOrdinalsWithScoreCollector.Occurrences |
|
GlobalOrdinalsWithScoreCollector.Scores |
|
GlobalOrdinalsWithScoreCollector.Sum |
|
GlobalOrdinalsWithScoreQuery |
|
GlobalOrdinalsWithScoreQuery.OrdinalMapScorer |
|
GlobalOrdinalsWithScoreQuery.SegmentOrdinalScorer |
|
GradientFormatter |
Formats text with different color intensity depending on the score of the
term.
|
GraphTokenFilter |
|
GraphTokenFilter.Token |
|
GraphTokenStreamFiniteStrings |
|
GraphvizFormatter |
Outputs the dot (graphviz) string for the viterbi lattice.
|
GraphvizFormatter |
Outputs the dot (graphviz) string for the viterbi lattice.
|
GreekAnalyzer |
|
GreekAnalyzer.DefaultSetHolder |
|
GreekLowerCaseFilter |
Normalizes token text to lower case, removes some Greek diacritics,
and standardizes final sigma to sigma.
|
GreekLowerCaseFilterFactory |
|
GreekStemFilter |
|
GreekStemFilterFactory |
|
GreekStemmer |
A stemmer for Greek words, according to: Development of a Stemmer for the
Greek Language. Georgios Ntais
|
GroupDocs<T> |
Represents one group in the results.
|
GroupFacetCollector |
Base class for computing grouped facets.
|
GroupFacetCollector.FacetEntry |
Represents a facet entry with a value and a count.
|
GroupFacetCollector.GroupedFacetResult |
The grouped facet result.
|
GroupFacetCollector.SegmentResult |
Contains the local grouped segment counts for a particular segment.
|
GroupFacetCollector.SegmentResultPriorityQueue |
|
GroupingSearch |
Convenience class to perform grouping in a non distributed environment.
|
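A hedged sketch of single-pass usage, assuming the "author" group field was indexed with sorted doc values; the searcher and query are assumed to exist already.

    import java.io.IOException;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.grouping.GroupingSearch;
    import org.apache.lucene.search.grouping.TopGroups;

    class GroupingSketch {
      static TopGroups<?> topAuthors(IndexSearcher searcher, Query query) throws IOException {
        GroupingSearch grouping = new GroupingSearch("author");
        grouping.setGroupDocsLimit(5);                   // up to 5 docs per group
        return grouping.search(searcher, query, 0, 10);  // first 10 groups
      }
    }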
GroupQueryNode |
A GroupQueryNode represents a location where the original user typed
real parenthesis on the query string.
|
GroupQueryNodeBuilder |
|
GroupReducer<T,C extends Collector> |
Concrete implementations of this class define what to collect for individual
groups during the second-pass of a grouping search.
|
GroupReducer.GroupCollector<C extends Collector> |
|
GroupSelector<T> |
Defines a group, for use by grouping collectors.
A GroupSelector acts as an iterator over documents.
|
GroupSelector.State |
What to do with the current value
|
GrowableByteArrayDataOutput |
|
GrowableWriter |
Implements PackedInts.Mutable , but grows the
bit count of the underlying packed ints on-demand.
|
HalfFloatPoint |
An indexed half-float field for fast range filters.
|
HardlinkCopyDirectoryWrapper |
|
HashFunction |
Base class for hashing functions that can be referred to by name.
|
HeapPointReader |
Utility class to read buffered points from in-heap arrays.
|
HeapPointReader.HeapPointValue |
Reusable implementation for a point value on-heap
|
HeapPointWriter |
Utility class to write new points into in-heap arrays.
|
HHMMSegmenter |
Finds the optimal segmentation of a sentence into Chinese words
|
HighFreqTerms |
HighFreqTerms class extracts the top n most frequent terms
(by document frequency) from an existing Lucene index and reports their
document frequency.
|
HighFreqTerms.DocFreqComparator |
Compares terms by docTermFreq
|
HighFreqTerms.TermStatsQueue |
Priority queue for TermStats objects
|
HighFreqTerms.TotalTermFreqComparator |
Compares terms by totalTermFreq
|
HighFrequencyDictionary |
HighFrequencyDictionary: terms taken from the given field
of a Lucene index, which appear in a number of documents
above a given threshold.
|
Highlighter |
|
Highlighter.FragmentQueue |
|
HighlightsMatch |
QueryMatch object that contains the hit positions of a matching Query
|
HighlightsMatch.Hit |
Represents an individual hit
|
HindiAnalyzer |
Analyzer for Hindi.
|
HindiAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
HindiNormalizationFilter |
|
HindiNormalizationFilterFactory |
|
HindiNormalizer |
Normalizer for Hindi.
|
HindiStemFilter |
|
HindiStemFilterFactory |
|
HindiStemmer |
Light Stemmer for Hindi.
|
HitQueue |
|
HitsThresholdChecker |
Used for defining custom algorithms to allow searches to early terminate
|
HitsThresholdChecker.GlobalHitsThresholdChecker |
Implementation of HitsThresholdChecker which allows global hit counting
|
HitsThresholdChecker.LocalHitsThresholdChecker |
Default implementation of HitsThresholdChecker to be used for single threaded execution
|
HMMChineseTokenizer |
Tokenizer for Chinese or mixed Chinese-English text.
|
HMMChineseTokenizerFactory |
|
HTMLStripCharFilter |
A CharFilter that wraps another Reader and attempts to strip out HTML constructs.
|
HTMLStripCharFilter.TextSegment |
|
HTMLStripCharFilterFactory |
|
HungarianAnalyzer |
|
HungarianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
HungarianLightStemFilter |
|
HungarianLightStemFilterFactory |
|
HungarianLightStemmer |
Light Stemmer for Hungarian.
|
HungarianStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
HunspellStemFilter |
TokenFilter that uses hunspell affix rules and words to stem tokens.
|
HunspellStemFilterFactory |
|
Hyphen |
This class represents a hyphen.
|
HyphenatedWordsFilter |
When the plain text is extracted from documents, we will often have many words hyphenated and broken into
two lines.
|
HyphenatedWordsFilterFactory |
|
Hyphenation |
This class represents a hyphenated word.
|
HyphenationCompoundWordTokenFilter |
A TokenFilter that decomposes compound words found in many Germanic languages.
|
HyphenationCompoundWordTokenFilterFactory |
|
HyphenationTree |
This tree structure stores the hyphenation patterns in an efficient way for
fast lookup.
|
IBSimilarity |
Provides a framework for the family of information-based models, as described
in Stéphane Clinchant and Eric Gaussier.
|
ICUCollatedTermAttributeImpl |
Extension of CharTermAttributeImpl that encodes the term
text as a binary Unicode collation key instead of as UTF-8 bytes.
|
ICUCollationAttributeFactory |
Converts each token into its CollationKey , and
then encodes bytes as an index term.
|
ICUCollationDocValuesField |
|
ICUCollationKeyAnalyzer |
|
ICUFoldingFilter |
A TokenFilter that applies search term folding to Unicode text,
applying foldings from UTR#30 Character Foldings.
|
ICUFoldingFilterFactory |
|
ICUNormalizer2CharFilter |
Normalize token text with ICU's Normalizer2 .
|
ICUNormalizer2CharFilterFactory |
|
ICUNormalizer2Filter |
Normalize token text with ICU's Normalizer2
|
ICUNormalizer2FilterFactory |
|
ICUTokenizer |
Breaks text into words according to UAX #29: Unicode Text Segmentation
(http://www.unicode.org/reports/tr29/)
|
ICUTokenizerConfig |
Class that allows for tailored Unicode Text Segmentation on
a per-writing system basis.
|
ICUTokenizerFactory |
|
ICUTransformFilter |
|
ICUTransformFilter.ReplaceableTermAttribute |
|
ICUTransformFilterFactory |
|
IdentityEncoder |
Does nothing other than convert the char array to a byte array using the specified encoding.
|
IDFValueSource |
|
IDVersionPostingsFormat |
|
IDVersionPostingsReader |
|
IDVersionPostingsWriter |
|
IDVersionSegmentTermsEnum |
|
IDVersionSegmentTermsEnumFrame |
|
IDVersionTermState |
|
IfFunction |
Depending on the boolean value of the ifSource function,
returns the value of the trueSource or falseSource function.
|
Impact |
Per-document scoring factors.
|
Impacts |
Information about upcoming impacts, ie.
|
ImpactsDISI |
|
ImpactsEnum |
Extension of PostingsEnum which also provides information about
upcoming impacts.
|
ImpactsSource |
|
Independence |
Computes the measure of divergence from independence for DFI
scoring functions.
|
IndependenceChiSquared |
Normalized chi-squared measure of distance from independence
|
IndependenceSaturated |
Saturated measure of distance from independence
|
IndependenceStandardized |
Standardized measure of distance from independence
|
IndexableField |
Represents a single field for indexing.
|
IndexableFieldType |
Describes the properties of a field.
|
IndexCommit |
|
IndexDeletionPolicy |
|
IndexDictionary |
Immutable stateless index dictionary kept in RAM.
|
IndexDictionary.Browser |
|
IndexDictionary.BrowserSupplier |
|
IndexDictionary.Builder |
|
IndexedDISI |
Disk-based implementation of a DocIdSetIterator which can return
the index of the current document, i.e.
|
IndexedDISI |
Disk-based implementation of a DocIdSetIterator which can return
the index of the current document, i.e.
|
IndexedDISI.Method |
|
IndexedDISI.Method |
|
IndexFileDeleter |
|
IndexFileDeleter.CommitPoint |
Holds details for each commit point.
|
IndexFileDeleter.RefCount |
Tracks the reference count for a single index file:
|
IndexFileNames |
This class contains useful constants representing filenames and extensions
used by Lucene, as well as convenience methods for querying whether a file
name matches an extension (matchesExtension) and for generating file names
from a segment name, generation and extension (fileNameFromGeneration,
segmentFileName).
|
IndexFormatTooNewException |
This exception is thrown when Lucene detects
an index that is newer than this Lucene version.
|
IndexFormatTooOldException |
This exception is thrown when Lucene detects
an index that is too old for this Lucene version.
|
IndexInput |
Abstract base class for input from a file in a Directory .
|
IndexMergeTool |
Merges indices specified on the command line into the index
specified as the first command line argument.
|
IndexNotFoundException |
Signals that no index was found in the Directory.
|
IndexOptions |
Controls how much information is stored in the postings lists.
|
IndexOrDocValuesQuery |
A query that uses either an index structure (points or terms) or doc values
in order to run a query, depending which one is more efficient.
|
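A hedged sketch of the usual pairing: the same range expressed against both the points index and doc values, assuming "timestamp" was indexed as both a LongPoint and a SortedNumericDocValuesField.

    import org.apache.lucene.document.LongPoint;
    import org.apache.lucene.document.SortedNumericDocValuesField;
    import org.apache.lucene.search.IndexOrDocValuesQuery;
    import org.apache.lucene.search.Query;

    class IndexOrDocValuesSketch {
      static Query timestampRange(long from, long to) {
        // Points are fast when the range drives iteration; doc values are
        // cheaper when another clause already restricts the candidate docs.
        Query indexQuery = LongPoint.newRangeQuery("timestamp", from, to);
        Query dvQuery = SortedNumericDocValuesField.newSlowRangeQuery("timestamp", from, to);
        return new IndexOrDocValuesQuery(indexQuery, dvQuery);
      }
    }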
IndexOutput |
|
IndexReader |
IndexReader is an abstract class, providing an interface for accessing a
point-in-time view of an index.
|
IndexReader.CacheHelper |
A utility class that gives hooks in order to help build a cache based on
the data that is contained in this index.
|
IndexReader.CacheKey |
A cache key identifying a resource that is being cached on.
|
IndexReader.ClosedListener |
A listener that is called when a resource gets closed.
|
IndexReaderContext |
A struct like class that represents a hierarchical relationship between
IndexReader instances.
|
IndexReaderFunctions |
Class exposing static helper methods for generating DoubleValuesSource instances
over some IndexReader statistics
|
IndexReaderFunctions.IndexReaderDoubleValuesSource |
|
IndexReaderFunctions.NoCacheConstantDoubleValuesSource |
|
IndexReaderFunctions.NoCacheConstantLongValuesSource |
|
IndexReaderFunctions.ReaderFunction |
|
IndexReaderFunctions.SumTotalTermFreqValuesSource |
|
IndexReaderFunctions.TermFreqDoubleValuesSource |
|
IndexSearcher |
Implements search over a single IndexReader.
|
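A minimal sketch of opening a point-in-time reader over an existing Directory and running a query; the "title" stored field is illustrative.

    import java.io.IOException;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;

    class SearchSketch {
      static void printTopHits(Directory dir, Query query) throws IOException {
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
          IndexSearcher searcher = new IndexSearcher(reader);
          TopDocs top = searcher.search(query, 10);
          for (ScoreDoc sd : top.scoreDocs) {
            System.out.println(searcher.doc(sd.doc).get("title") + "  " + sd.score);
          }
        }
      }
    }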
IndexSearcher.LeafSlice |
A class holding a subset of the IndexSearcher's leaf contexts to be
executed within a single thread.
|
IndexSorter |
Handles how documents should be sorted in an index, both within a segment and between
segments.
|
IndexSorter.ComparableProvider |
Used for sorting documents across segments
|
IndexSorter.DocComparator |
A comparator of doc IDs, used for sorting documents within a segment
|
IndexSorter.DoubleSorter |
Sorts documents based on double values from a NumericDocValues instance
|
IndexSorter.FloatSorter |
Sorts documents based on float values from a NumericDocValues instance
|
IndexSorter.IntSorter |
Sorts documents based on integer values from a NumericDocValues instance
|
IndexSorter.LongSorter |
Sorts documents based on long values from a NumericDocValues instance
|
IndexSorter.NumericDocValuesProvider |
Provide a NumericDocValues instance for a LeafReader
|
IndexSorter.SortedDocValuesProvider |
Provide a SortedDocValues instance for a LeafReader
|
IndexSorter.StringSorter |
Sorts documents based on terms from a SortedDocValues instance
|
IndexSortSortedNumericDocValuesRangeQuery |
A range query that can take advantage of the fact that the index is sorted to speed up
execution.
|
IndexSortSortedNumericDocValuesRangeQuery.BoundedDocSetIdIterator |
A doc ID set iterator that wraps a delegate iterator and only returns doc IDs in
the range [firstDocInclusive, lastDoc).
|
IndexSortSortedNumericDocValuesRangeQuery.ValueComparator |
Compares the given document's value with a stored reference value.
|
IndexSplitter |
Command-line tool that enables listing segments in an
index, copying specific segments to another index, and
deleting segments from an index.
|
IndexUpgrader |
This is an easy-to-use tool that upgrades all segments of an index from previous Lucene versions
to the current segment file format.
|
IndexWriter |
An IndexWriter creates and maintains an index.
|
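A minimal sketch of creating (or appending to) an index and adding one document; the path and field names are illustrative.

    import java.io.IOException;
    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    class IndexSketch {
      static void indexOne() throws IOException {
        try (Directory dir = FSDirectory.open(Paths.get("/tmp/index"));
             IndexWriter writer = new IndexWriter(dir,
                 new IndexWriterConfig(new StandardAnalyzer()))) {
          Document doc = new Document();
          doc.add(new TextField("title", "Hello Lucene", Field.Store.YES));
          writer.addDocument(doc);
          writer.commit();   // make the new segment visible to fresh readers
        }
      }
    }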
IndexWriter.DocModifier |
|
IndexWriter.DocStats |
DocStats for this index
|
IndexWriter.Event |
Interface for internal atomic events.
|
IndexWriter.EventQueue |
|
IndexWriter.IndexReaderWarmer |
If DirectoryReader.open(IndexWriter) has
been called (ie, this writer is in near real-time
mode), then after a merge completes, this class can be
invoked to warm the reader on the newly merged
segment, before the merge commits.
|
IndexWriter.IndexWriterMergeSource |
|
IndexWriterConfig |
Holds all the configuration that is used to create an IndexWriter .
|
IndexWriterConfig.OpenMode |
|
IndicNormalizationFilter |
|
IndicNormalizationFilterFactory |
|
IndicNormalizer |
Normalizes the Unicode representation of text in Indian languages.
|
IndicNormalizer.ScriptData |
|
IndonesianAnalyzer |
Analyzer for Indonesian (Bahasa)
|
IndonesianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
IndonesianStemFilter |
|
IndonesianStemFilterFactory |
|
IndonesianStemmer |
Stemmer for Indonesian.
|
InetAddressPoint |
An indexed 128-bit InetAddress field.
|
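A hedged sketch: index an address and match anything in a /16 prefix; the field name "ip" and the addresses are illustrative.

    import java.net.InetAddress;
    import java.net.UnknownHostException;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.InetAddressPoint;
    import org.apache.lucene.search.Query;

    class IpSketch {
      static Document makeDoc() throws UnknownHostException {
        Document doc = new Document();
        doc.add(new InetAddressPoint("ip", InetAddress.getByName("192.168.1.42")));
        return doc;
      }

      static Query sameSubnet() throws UnknownHostException {
        // Matches any address in 192.168.0.0/16.
        return InetAddressPoint.newPrefixQuery("ip", InetAddress.getByName("192.168.0.0"), 16);
      }
    }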
InetAddressRange |
An indexed InetAddress Range Field
|
InflectionAttribute |
Attribute for Kuromoji inflection data.
|
InflectionAttributeImpl |
Attribute for Kuromoji inflection data.
|
InfoStream |
|
InfoStream.NoOutput |
|
InMemorySorter |
|
InPlaceMergeSorter |
Sorter implementation based on the merge-sort algorithm that merges
in place (no extra memory will be allocated).
|
InputIterator |
|
InputIterator.InputIteratorWrapper |
Wraps a BytesRefIterator as a suggester InputIterator, with all weights
set to 1, carrying no payloads.
|
InputStreamDataInput |
|
IntArrayDocIdSet |
|
IntArrayDocIdSet.IntArrayDocIdSetIterator |
|
IntBlockPool |
|
IntBlockPool.Allocator |
Abstract class for allocating and freeing int
blocks.
|
IntBlockPool.DirectAllocator |
|
IntBlockPool.SliceReader |
|
IntBlockPool.SliceWriter |
|
IntDocValues |
Abstract FunctionValues implementation which supports retrieving int values.
|
IntegerEncoder |
Encode a character array Integer as a BytesRef .
|
IntersectBlockReader |
|
IntersectBlockReader.BlockIteration |
Block iteration order.
|
IntersectTermsEnum |
|
IntersectTermsEnum.NoMoreTermsException |
|
IntersectTermsEnumFrame |
|
IntervalFilter |
|
IntervalIterator |
A DocIdSetIterator that also allows iteration over matching
intervals in a document.
|
IntervalMatches |
|
IntervalMatches.State |
|
IntervalMatchesIterator |
|
IntervalQuery |
A query that retrieves documents containing intervals returned from an
IntervalsSource.
Static constructor functions for various different sources can be found in the
Intervals class.
Scores for this query are computed as a function of the sloppy frequency of
intervals appearing in a particular document.
|
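A hedged sketch of a proximity query built from the Intervals factory methods; the field name is illustrative, and the intervals package has moved between releases (org.apache.lucene.queries.intervals in recent ones).

    import org.apache.lucene.queries.intervals.IntervalQuery;
    import org.apache.lucene.queries.intervals.Intervals;
    import org.apache.lucene.search.Query;

    class IntervalSketch {
      static Query searchEngineNearby() {
        // "search" followed by "engine" with at most two positions in between.
        return new IntervalQuery("body",
            Intervals.maxgaps(2,
                Intervals.ordered(Intervals.term("search"), Intervals.term("engine"))));
      }
    }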
Intervals |
|
IntervalScoreFunction |
|
IntervalScoreFunction.SaturationFunction |
|
IntervalScoreFunction.SigmoidFunction |
|
IntervalScorer |
|
IntervalsSource |
A helper class for IntervalQuery that provides an IntervalIterator
for a given field and segment.
Static constructor functions for various different sources can be found in the
Intervals class.
|
IntFieldSource |
|
IntPoint |
An indexed int field for fast range filters.
|
IntPointMultiRangeBuilder |
Builder for multi range queries for IntPoints
|
IntRange |
An indexed Integer Range field.
|
IntRangeDocValuesField |
DocValues field for IntRange.
|
IntRangeSlowRangeQuery |
|
IntroSelector |
Implementation of the quick select algorithm.
|
IntroSorter |
Sorter implementation based on a variant of the quicksort algorithm
called introsort: when
the recursion level exceeds the log of the length of the array to sort, it
falls back to heapsort.
|
IntSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of ints.
|
IntsRef |
Represents int[], as a slice (offset + length) into an
existing int[].
|
IntsRefBuilder |
|
IntsRefFSTEnum<T> |
Enumerates all input (IntsRef) + output pairs in an
FST.
|
IntsRefFSTEnum.InputOutput<T> |
Holds a single input (IntsRef) + output pair.
|
InvalidTokenOffsetsException |
Exception thrown if TokenStream Tokens are incompatible with provided text
|
IOContext |
IOContext holds additional details on the merge/search context.
|
IOContext.Context |
Context is an enumerator which specifies the context in which the Directory
is being used.
|
IOSupplier<T> |
This is a result supplier that is allowed to throw an IOException.
|
IOUtils |
This class emulates the new Java 7 "Try-With-Resources" statement.
|
IOUtils.IOConsumer<T> |
An IO operation with a single input.
|
IOUtils.IOFunction<T,R> |
A Function that may throw an IOException
|
IrishAnalyzer |
|
IrishAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
IrishLowerCaseFilter |
Normalises token text to lower case, handling t-prothesis
and n-eclipsis (i.e., that 'nAthair' should become 'n-athair')
|
IrishLowerCaseFilterFactory |
|
IrishStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
ISO8859_14Decoder |
|
ItalianAnalyzer |
|
ItalianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
ItalianLightStemFilter |
|
ItalianLightStemFilterFactory |
|
ItalianLightStemmer |
Light Stemmer for Italian.
|
ItalianStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
JapaneseAnalyzer |
Analyzer for Japanese that uses morphological analysis.
|
JapaneseAnalyzer.DefaultSetHolder |
Atomically loads DEFAULT_STOP_SET, DEFAULT_STOP_TAGS in a lazy fashion once the
outer class accesses the static final set the first time.
|
JapaneseBaseFormFilter |
|
JapaneseBaseFormFilterFactory |
|
JapaneseIterationMarkCharFilter |
Normalizes Japanese horizontal iteration marks (odoriji) to their expanded form.
|
JapaneseIterationMarkCharFilterFactory |
|
JapaneseKatakanaStemFilter |
A TokenFilter that normalizes common katakana spelling variations
ending in a long sound character by removing this character (U+30FC).
|
JapaneseKatakanaStemFilterFactory |
|
JapaneseNumberFilter |
A TokenFilter that normalizes Japanese numbers (kansūji) to regular Arabic
decimal numbers in half-width characters.
|
JapaneseNumberFilter.NumberBuffer |
Buffer that holds a Japanese number string and a position index used as a parsed-to marker
|
JapaneseNumberFilterFactory |
|
JapanesePartOfSpeechStopFilter |
Removes tokens that match a set of part-of-speech tags.
|
JapanesePartOfSpeechStopFilterFactory |
|
JapaneseReadingFormFilter |
A TokenFilter that replaces the term
attribute with the reading of a token in either katakana or romaji form.
|
JapaneseReadingFormFilterFactory |
|
JapaneseTokenizer |
Tokenizer for Japanese that uses morphological analysis.
|
JapaneseTokenizer.Lattice |
|
JapaneseTokenizer.Mode |
Tokenization mode: this determines how the tokenizer handles
compound and unknown words.
|
JapaneseTokenizer.Position |
|
JapaneseTokenizer.Type |
Token type reflecting the original source of this token
|
JapaneseTokenizer.WrappedPositionArray |
|
JapaneseTokenizerFactory |
|
JaroWinklerDistance |
Similarity measure for short strings such as person names.
|
JaspellLookup |
Deprecated.
|
JaspellTernarySearchTrie |
Deprecated.
|
JaspellTernarySearchTrie.TSTNode |
An inner class of Ternary Search Trie that represents a node in the trie.
|
JavascriptBaseVisitor<T> |
This class provides an empty implementation of JavascriptVisitor ,
which can be extended to create a visitor which only needs to handle a subset
of the available methods.
|
JavascriptCompiler |
An expression compiler for javascript expressions.
|
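A hedged sketch of compiling an expression and using it as a sort; the "popularity" field is illustrative, and the SimpleBindings methods shown (binding names to a DoubleValuesSource) are assumed to be available in this release.

    import java.text.ParseException;
    import org.apache.lucene.expressions.Expression;
    import org.apache.lucene.expressions.SimpleBindings;
    import org.apache.lucene.expressions.js.JavascriptCompiler;
    import org.apache.lucene.search.DoubleValuesSource;
    import org.apache.lucene.search.Sort;

    class ExpressionSketch {
      static Sort popularitySort() throws ParseException {
        Expression expr = JavascriptCompiler.compile("sqrt(_score) + ln(popularity)");
        SimpleBindings bindings = new SimpleBindings();
        bindings.add("_score", DoubleValuesSource.SCORES);
        bindings.add("popularity", DoubleValuesSource.fromLongField("popularity"));
        // true = sort descending by the computed expression value.
        return new Sort(expr.getSortField(bindings, true));
      }
    }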
JavascriptCompiler.Loader |
|
JavascriptErrorHandlingLexer |
Overrides the ANTLR 4 generated JavascriptLexer to allow for proper error handling
|
JavascriptLexer |
|
JavascriptParser |
|
JavascriptParser.AddsubContext |
|
JavascriptParser.BoolandContext |
|
JavascriptParser.BoolcompContext |
|
JavascriptParser.BooleqneContext |
|
JavascriptParser.BoolorContext |
|
JavascriptParser.BwandContext |
|
JavascriptParser.BworContext |
|
JavascriptParser.BwshiftContext |
|
JavascriptParser.BwxorContext |
|
JavascriptParser.CompileContext |
|
JavascriptParser.ConditionalContext |
|
JavascriptParser.ExpressionContext |
|
JavascriptParser.ExternalContext |
|
JavascriptParser.MuldivContext |
|
JavascriptParser.NumericContext |
|
JavascriptParser.PrecedenceContext |
|
JavascriptParser.UnaryContext |
|
JavascriptParserErrorStrategy |
Allows for proper error handling in the ANTLR 4 parser
|
JavascriptVisitor<T> |
This interface defines a complete generic visitor for a parse tree produced
by JavascriptParser .
|
JoinDocFreqValueSource |
Use a field value and find the Document Frequency within another field.
|
JoinUtil |
Utility for query time joining.
|
KeepOnlyLastCommitDeletionPolicy |
This IndexDeletionPolicy implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.
|
KeepWordFilter |
A TokenFilter that only keeps tokens with text contained in the
required words.
|
KeepWordFilterFactory |
|
KeywordAnalyzer |
"Tokenizes" the entire stream as a single token.
|
KeywordAttribute |
This attribute can be used to mark a token as a keyword.
|
KeywordAttributeImpl |
|
KeywordMarkerFilter |
|
KeywordMarkerFilterFactory |
|
KeywordRepeatFilter |
This TokenFilter emits each incoming token twice, once as a keyword and once as a non-keyword; in other words, once with
KeywordAttribute.setKeyword(boolean) set to true and once set to false.
|
KeywordRepeatFilterFactory |
|
KeywordTokenizer |
Emits the entire input as a single token.
|
KeywordTokenizerFactory |
|
KNearestFuzzyClassifier |
|
KNearestNeighborClassifier |
A k-Nearest Neighbor classifier (see http://en.wikipedia.org/wiki/K-nearest_neighbors ) based
on MoreLikeThis
|
KNearestNeighborDocumentClassifier |
A k-Nearest Neighbor Document classifier (see http://en.wikipedia.org/wiki/K-nearest_neighbors ) based
on MoreLikeThis .
|
KoreanAnalyzer |
Analyzer for Korean that uses morphological analysis.
|
KoreanNumberFilter |
A TokenFilter that normalizes Korean numbers to regular Arabic
decimal numbers in half-width characters.
|
KoreanNumberFilter.NumberBuffer |
Buffer that holds a Korean number string and a position index used as a parsed-to marker
|
KoreanNumberFilterFactory |
|
KoreanPartOfSpeechStopFilter |
Removes tokens that match a set of part-of-speech tags.
|
KoreanPartOfSpeechStopFilterFactory |
|
KoreanReadingFormFilter |
Replaces term text with the ReadingAttribute which is
the Hangul transcription of Hanja characters.
|
KoreanReadingFormFilterFactory |
|
KoreanTokenizer |
Tokenizer for Korean that uses morphological analysis.
|
KoreanTokenizer.DecompoundMode |
|
KoreanTokenizer.Position |
|
KoreanTokenizer.Type |
Token type reflecting the original source of this token
|
KoreanTokenizer.WrappedPositionArray |
|
KoreanTokenizerFactory |
|
KpStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a snowball script.
|
KStemData1 |
A list of words used by Kstem
|
KStemData2 |
A list of words used by Kstem
|
KStemData3 |
A list of words used by Kstem
|
KStemData4 |
A list of words used by Kstem
|
KStemData5 |
A list of words used by Kstem
|
KStemData6 |
A list of words used by Kstem
|
KStemData7 |
A list of words used by Kstem
|
KStemData8 |
A list of words used by Kstem
|
KStemFilter |
A high-performance kstem filter for english.
|
KStemFilterFactory |
|
KStemmer |
This class implements the Kstem algorithm
|
KStemmer.DictEntry |
|
LabelledCharArrayMatcher |
Associates a label with a CharArrayMatcher to distinguish different sources for terms in highlighting
|
Lambda |
The lambda (λw) parameter in information-based
models.
|
LambdaDF |
Computes lambda as (docFreq+1) / (numberOfDocuments+1).
|
LambdaTTF |
Computes lambda as (totalTermFreq+1) / (numberOfDocuments+1).
|
LargeNumHitsTopDocsCollector |
Optimized collector for large number of hits.
|
LatLonBoundingBox |
An indexed 2-Dimension Bounding Box field for the Geospatial Lat/Lon Coordinate system
|
LatLonDocValuesBoxQuery |
|
LatLonDocValuesDistanceQuery |
|
LatLonDocValuesField |
A per-document location field.
|
LatLonDocValuesPointInPolygonQuery |
|
LatLonGeometry |
Lat/Lon Geometry object.
|
LatLonPoint |
An indexed location field.
|
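A minimal sketch of indexing a location and finding documents within 10 km of a point; the coordinates and the field name "location" are illustrative. A companion LatLonDocValuesField is added only because distance sorting needs doc values.

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.LatLonDocValuesField;
    import org.apache.lucene.document.LatLonPoint;
    import org.apache.lucene.search.Query;

    class GeoSketch {
      static Document makeDoc(double lat, double lon) {
        Document doc = new Document();
        doc.add(new LatLonPoint("location", lat, lon));          // for filtering
        doc.add(new LatLonDocValuesField("location", lat, lon)); // for sorting
        return doc;
      }

      static Query within10km(double lat, double lon) {
        return LatLonPoint.newDistanceQuery("location", lat, lon, 10_000);
      }
    }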
LatLonPointDistanceComparator |
Compares documents by distance from an origin point
|
LatLonPointDistanceFeatureQuery |
|
LatLonPointDistanceQuery |
|
LatLonPointInPolygonQuery |
Finds all previously indexed points that fall within the specified polygons.
|
LatLonPointPrototypeQueries |
Holder class for prototype sandboxed queries.
When a query graduates from the sandbox, these static calls should be
placed in LatLonPoint.
|
LatLonPointSortField |
Sorts by distance from an origin location.
|
LatLonShape |
A geo shape utility class for indexing and searching GIS geometries
whose vertices are latitude, longitude values (in decimal degrees).
|
LatLonShapeBoundingBoxQuery |
Finds all previously indexed geo shapes that intersect the specified bounding box.
|
LatLonShapeBoundingBoxQuery.EncodedRectangle |
Holds spatial logic for a bounding box that works in the encoded space
|
LatLonShapeQuery |
|
LatvianAnalyzer |
|
LatvianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
LatvianStemFilter |
|
LatvianStemFilterFactory |
|
LatvianStemmer |
Light stemmer for Latvian.
|
LatvianStemmer.Affix |
|
LazyDocument |
Defers actually loading a field's value until you ask
for it.
|
LeafCollector |
Collector decouples the score from the collected doc:
the score computation is skipped entirely if it's not
needed.
|
LeafFieldComparator |
Expert: comparator that gets instantiated on each leaf
from a top-level FieldComparator instance.
|
LeafMetaData |
Provides read-only metadata about a leaf.
|
LeafReader |
LeafReader is an abstract class, providing an interface for accessing an
index.
|
LeafReaderContext |
|
LeafSimScorer |
|
LegacyBinaryDocValues |
Deprecated.
|
LegacyBinaryDocValuesWrapper |
Deprecated.
|
LegacyBM25Similarity |
Deprecated.
|
LegacyDocValuesIterables |
Bridge helper methods for legacy codecs to map sorted doc values to iterables.
|
LegacyFieldsIndexReader |
|
LegacyNumericDocValues |
Deprecated.
|
LegacyNumericDocValuesWrapper |
Deprecated.
|
LegacySortedDocValues |
Deprecated.
|
LegacySortedDocValuesWrapper |
Deprecated.
|
LegacySortedNumericDocValues |
Deprecated.
|
LegacySortedNumericDocValuesWrapper |
Deprecated.
|
LegacySortedSetDocValues |
Deprecated.
|
LegacySortedSetDocValuesWrapper |
Deprecated.
|
LengthFilter |
Removes words that are too long or too short from the stream.
|
LengthFilterFactory |
|
LengthGoalBreakIterator |
Wraps another BreakIterator to skip past breaks that would result in passages that are too
short.
|
LetterTokenizer |
A LetterTokenizer is a tokenizer that divides text at non-letters.
|
LetterTokenizerFactory |
|
Lev1ParametricDescription |
Parametric description for generating a Levenshtein automaton of degree 1
|
Lev1TParametricDescription |
Parametric description for generating a Levenshtein automaton of degree 1,
with transpositions as primitive edits
|
Lev2ParametricDescription |
Parametric description for generating a Levenshtein automaton of degree 2
|
Lev2TParametricDescription |
Parametric description for generating a Levenshtein automaton of degree 2,
with transpositions as primitive edits
|
LevenshteinAutomata |
Class to construct DFAs that match a word within some edit distance.
|
LevenshteinAutomata.ParametricDescription |
A ParametricDescription describes the structure of a Levenshtein DFA for some degree n.
|
LevenshteinDistance |
Levenshtein edit distance class.
|
Lift |
The Lift class is a data structure that is a variation of a Patricia trie.
|
LikeThisQueryBuilder |
|
LimitedFiniteStringsIterator |
|
LimitTokenCountAnalyzer |
This Analyzer limits the number of tokens while indexing.
|
LimitTokenCountFilter |
This TokenFilter limits the number of tokens while indexing.
|
LimitTokenCountFilterFactory |
|
LimitTokenOffsetFilter |
Lets all tokens pass through until it sees one with a start offset greater than a
configured limit; that token won't pass, and it ends the stream.
|
LimitTokenOffsetFilter |
This is a simplified version of org.apache.lucene.analysis.miscellaneous.LimitTokenOffsetFilter to prevent
a dependency on analyzers-common.jar.
|
LimitTokenOffsetFilterFactory |
|
LimitTokenPositionFilter |
This TokenFilter limits its emitted tokens to those with positions that
are not greater than the configured limit.
|
LimitTokenPositionFilterFactory |
|
Line |
Represents a line on the earth's surface.
|
Line2D |
2D geo line implementation represented as a balanced interval tree of edges.
|
LinearFloatFunction |
LinearFloatFunction implements a linear function over
another ValueSource .
|
ListOfOutputs<T> |
Wraps another Outputs implementation and encodes one or
more of its output values.
|
LiteralValueSource |
Passes the field value through as a String, no matter the type.
|
LithuanianAnalyzer |
|
LithuanianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
LithuanianStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a Snowball script.
|
LiveDocsFormat |
Format for live/deleted documents
|
LiveFieldValues<S,T> |
Tracks live field values across NRT reader reopens.
|
LiveIndexWriterConfig |
Holds all the configuration used by IndexWriter with few setters for
settings that can be changed on an IndexWriter instance "live".
|
LMDirichletSimilarity |
Bayesian smoothing using Dirichlet priors.
|
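A minimal sketch of switching a searcher to this similarity; reader stands in for an already-open IndexReader, and 2000 is simply the commonly used default for the mu smoothing parameter:

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.similarities.LMDirichletSimilarity;

    IndexSearcher searcher = new IndexSearcher(reader);        // reader: an existing IndexReader
    searcher.setSimilarity(new LMDirichletSimilarity(2000f));  // mu smoothing parameter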
LMJelinekMercerSimilarity |
Language model based on the Jelinek-Mercer smoothing method.
|
LMSimilarity |
Abstract superclass for language modeling Similarities.
|
LMSimilarity.CollectionModel |
A strategy for computing the collection language model.
|
LMSimilarity.DefaultCollectionModel |
Models p(w|C) as the number of occurrences of the term in the
collection, divided by the total number of tokens + 1 .
|
LMSimilarity.LMStats |
Stores the collection distribution of the current term.
|
Lock |
An interprocess mutex lock.
|
LockFactory |
Base class for Locking implementation.
|
LockObtainFailedException |
This exception is thrown when the write.lock
could not be acquired.
|
LockReleaseFailedException |
This exception is thrown when the write.lock
could not be released.
|
LockStressTest |
Simple standalone tool that forever acquires and releases a
lock using a specific LockFactory.
|
LockValidatingDirectoryWrapper |
This class makes a best-effort check that a provided Lock
is valid before any destructive filesystem operation.
|
LockVerifyServer |
|
LogByteSizeMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the total byte size of the segment's files.
|
LogDocMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the number of documents (not taking deletions
into account).
|
LogMergePolicy |
This class implements a MergePolicy that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.
|
LogMergePolicy.SegmentInfoAndLevel |
|
LongBitSet |
BitSet of fixed length (numBits), backed by accessible ( LongBitSet.getBits() )
long[], accessed with a long index.
|
LongDistanceFeatureQuery |
|
LongDocValues |
Abstract FunctionValues implementation which supports retrieving long values.
|
LongFieldSource |
|
LongHashSet |
|
LongPoint |
An indexed long field for fast range filters.
|
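A short illustrative sketch (field name and timestamp values are made up) of indexing a LongPoint and filtering on it with a range query; note a LongPoint is indexed only, so it is often paired with a stored or doc-values field for retrieval:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.LongPoint;
    import org.apache.lucene.search.Query;

    Document doc = new Document();
    doc.add(new LongPoint("timestamp", 1577836800000L));   // indexed for fast range filtering
    // match documents whose timestamp falls in [lower, upper]
    Query q = LongPoint.newRangeQuery("timestamp", 1577836800000L, 1580515199999L);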
LongPointMultiRangeBuilder |
Builder for multi range queries for LongPoints
|
LongRange |
An indexed Long Range field.
|
LongRange |
Represents a contiguous range of long values, with an inclusive minimum and
exclusive maximum
|
LongRangeDocValuesField |
DocValues field for LongRange.
|
LongRangeFactory |
Groups long values into ranges
|
LongRangeGroupSelector |
A GroupSelector implementation that groups documents by long values
|
LongRangeSlowRangeQuery |
|
LongsRef |
Represents long[], as a slice (offset + length) into an
existing long[].
|
LongValues |
Per-segment, per-document long values, which can be calculated at search-time
|
LongValues |
Abstraction over an array of longs.
|
LongValuesSource |
|
LongValuesSource.ConstantLongValuesSource |
|
LongValuesSource.DoubleLongValuesSource |
|
LongValuesSource.FieldValuesSource |
|
LongValuesSource.LongValuesComparatorSource |
|
LongValuesSource.LongValuesHolder |
|
LongValuesSource.LongValuesSortField |
|
Lookup |
Simple Lookup interface for CharSequence suggestions.
|
Lookup.CharSequenceComparator |
|
Lookup.LookupPriorityQueue |
|
Lookup.LookupResult |
Result of a lookup.
|
LovinsStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a Snowball script.
|
LowercaseAsciiCompression |
Utility class that can efficiently compress arrays that mostly contain
characters in the [0x1F,0x3F) or [0x5F,0x7F) ranges, which notably
include all digits, lowercase characters, '.', '-' and '_'.
|
LowerCaseFilter |
Normalizes token text to lower case.
|
LowerCaseFilter |
Normalizes token text to lower case.
|
LowerCaseFilterFactory |
|
LRUQueryCache |
A QueryCache that evicts queries using a LRU (least-recently-used)
eviction policy in order to remain under a given maximum size and number of
bytes used.
|
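A hedged sketch of installing a query cache on an IndexSearcher; the size limits are arbitrary, and reader is an assumed, already-open IndexReader:

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.LRUQueryCache;
    import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;

    // cache up to 1000 queries, bounded to roughly 64 MB of heap (illustrative limits)
    LRUQueryCache cache = new LRUQueryCache(1000, 64 * 1024 * 1024);
    IndexSearcher searcher = new IndexSearcher(reader);          // reader: an existing IndexReader
    searcher.setQueryCache(cache);
    searcher.setQueryCachingPolicy(new UsageTrackingQueryCachingPolicy());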
LRUQueryCache.MinSegmentSizePredicate |
|
LSBRadixSorter |
An LSB radix sorter for unsigned int values.
|
Lucene50CompoundFormat |
Lucene 5.0 compound file format
|
Lucene50CompoundReader |
Class for accessing a compound stream.
|
Lucene50CompoundReader.FileEntry |
Offset/Length for a slice inside of a compound file
|
Lucene50FieldInfosFormat |
Lucene 5.0 Field Infos format.
|
Lucene50LiveDocsFormat |
Lucene 5.0 live docs format
|
Lucene50PostingsFormat |
Lucene 5.0 postings format, which encodes postings in packed integer blocks
for fast decode.
|
Lucene50PostingsFormat.IntBlockTermState |
|
Lucene50PostingsReader |
Concrete class that reads docId(maybe frq,pos,offset,payloads) list
with postings format.
|
Lucene50ScoreSkipReader |
|
Lucene50ScoreSkipReader.MutableImpactList |
|
Lucene50SkipReader |
Implements the skip list reader for block postings format
that stores positions and payloads.
|
Lucene50StoredFieldsFormat |
Lucene 5.0 stored fields format.
|
Lucene50StoredFieldsFormat.Mode |
Configuration option for stored fields.
|
Lucene50TermVectorsFormat |
|
Lucene60FieldInfosFormat |
Lucene 6.0 Field Infos format.
|
Lucene60PointsFormat |
Lucene 6.0 point format, which encodes dimensional values in a block KD-tree structure
for fast 1D range and N dimensional shape intersection filtering.
|
Lucene60PointsReader |
Reads point values previously written with Lucene60PointsWriter
|
Lucene70Codec |
Implements the Lucene 7.0 index format, with configurable per-field postings
and docvalues formats.
|
Lucene70DocValuesConsumer |
|
Lucene70DocValuesConsumer.MinMaxTracker |
|
Lucene70DocValuesFormat |
Lucene 7.0 DocValues format.
|
Lucene70DocValuesProducer |
|
Lucene70DocValuesProducer.BaseSortedDocValues |
|
Lucene70DocValuesProducer.BaseSortedSetDocValues |
|
Lucene70DocValuesProducer.BinaryEntry |
|
Lucene70DocValuesProducer.DenseBinaryDocValues |
|
Lucene70DocValuesProducer.DenseNumericDocValues |
|
Lucene70DocValuesProducer.NumericEntry |
|
Lucene70DocValuesProducer.SortedEntry |
|
Lucene70DocValuesProducer.SortedNumericEntry |
|
Lucene70DocValuesProducer.SortedSetEntry |
|
Lucene70DocValuesProducer.SparseBinaryDocValues |
|
Lucene70DocValuesProducer.SparseNumericDocValues |
|
Lucene70DocValuesProducer.TermsDict |
|
Lucene70DocValuesProducer.TermsDictEntry |
|
Lucene70NormsConsumer |
|
Lucene70NormsFormat |
Lucene 7.0 Score normalization format.
|
Lucene70NormsProducer |
|
Lucene70NormsProducer.DenseNormsIterator |
|
Lucene70NormsProducer.NormsEntry |
|
Lucene70NormsProducer.SparseNormsIterator |
|
Lucene70SegmentInfoFormat |
Lucene 7.0 Segment info format.
|
Lucene80Codec |
Implements the Lucene 8.0 index format.
|
Lucene80DocValuesConsumer |
|
Lucene80DocValuesConsumer.MinMaxTracker |
|
Lucene80DocValuesFormat |
Lucene 8.0 DocValues format.
|
Lucene80DocValuesProducer |
|
Lucene80DocValuesProducer.BaseSortedDocValues |
|
Lucene80DocValuesProducer.BaseSortedSetDocValues |
|
Lucene80DocValuesProducer.BinaryEntry |
|
Lucene80DocValuesProducer.DenseBinaryDocValues |
|
Lucene80DocValuesProducer.DenseNumericDocValues |
|
Lucene80DocValuesProducer.NumericEntry |
|
Lucene80DocValuesProducer.SortedEntry |
|
Lucene80DocValuesProducer.SortedNumericEntry |
|
Lucene80DocValuesProducer.SortedSetEntry |
|
Lucene80DocValuesProducer.SparseBinaryDocValues |
|
Lucene80DocValuesProducer.SparseNumericDocValues |
|
Lucene80DocValuesProducer.TermsDict |
|
Lucene80DocValuesProducer.TermsDictEntry |
|
Lucene80NormsConsumer |
|
Lucene80NormsFormat |
Lucene 8.0 Score normalization format.
|
Lucene80NormsProducer |
|
Lucene80NormsProducer.DenseNormsIterator |
|
Lucene80NormsProducer.NormsEntry |
|
Lucene80NormsProducer.SparseNormsIterator |
|
Lucene84Codec |
Implements the Lucene 8.4 index format, with configurable per-field postings
and docvalues formats.
|
Lucene84PostingsFormat |
Lucene 8.4 postings format, which encodes postings in packed integer blocks
for fast decode.
|
Lucene84PostingsFormat.IntBlockTermState |
|
Lucene84PostingsReader |
Concrete class that reads docId(maybe frq,pos,offset,payloads) list
with postings format.
|
Lucene84PostingsWriter |
Concrete class that writes docId(maybe frq,pos,offset,payloads) list
with postings format.
|
Lucene84ScoreSkipReader |
|
Lucene84ScoreSkipReader.MutableImpactList |
|
Lucene84SkipReader |
Implements the skip list reader for block postings format
that stores positions and payloads.
|
Lucene84SkipWriter |
Write skip lists with multiple levels, and support skip within block ints.
|
Lucene86Codec |
Implements the Lucene 8.6 index format, with configurable per-field postings
and docvalues formats.
|
Lucene86PointsFormat |
Lucene 8.6 point format, which encodes dimensional values in a block KD-tree structure
for fast 1D range and N dimensional shape intersection filtering.
|
Lucene86PointsReader |
|
Lucene86PointsWriter |
Writes dimensional values
|
Lucene86SegmentInfoFormat |
Lucene 8.6 Segment info format.
|
LuceneDictionary |
Lucene Dictionary: terms taken from the given field
of a Lucene index.
|
LuceneLevenshteinDistance |
Damerau-Levenshtein (optimal string alignment) implemented in a consistent
way as Lucene's FuzzyTermsEnum with the transpositions option enabled.
|
LucenePackage |
Lucene's package information, including version.
|
LZ4 |
LZ4 compression and decompression routines.
|
LZ4.FastCompressionHashTable |
Simple lossy LZ4.HashTable that only stores the last occurrence for
each hash on 2^14 bytes of memory.
|
LZ4.HashTable |
A record of previous occurrences of sequences of 4 bytes.
|
LZ4.HighCompressionHashTable |
|
MapOfSets<K,V> |
Helper class for keeping Lists of Objects associated with keys.
|
MappedMultiFields |
A Fields implementation that merges multiple
Fields into one, and maps around deleted documents.
|
MappedMultiFields.MappedMultiTerms |
|
MappedMultiFields.MappedMultiTermsEnum |
|
MappingCharFilter |
Simplistic CharFilter that applies the mappings
contained in a NormalizeCharMap to the character
stream, and corrects the resulting changes to the
offsets.
|
MappingCharFilterFactory |
|
MappingMultiPostingsEnum |
Exposes flex API, merged from flex API of sub-segments,
remapping docIDs (this is used for segment merging).
|
MappingMultiPostingsEnum.MappingPostingsSub |
|
MatchAllDocsQuery |
A query that matches all documents.
|
MatchAllDocsQueryBuilder |
|
MatchAllDocsQueryNode |
A MatchAllDocsQueryNode indicates that a query node tree or subtree
will match all documents if executed in the index.
|
MatchAllDocsQueryNodeBuilder |
|
MatchAllDocsQueryNodeProcessor |
|
MatcherFactory<T extends QueryMatch> |
Interface for the creation of new CandidateMatcher objects
|
Matches |
|
MatchesIterator |
An iterator over match positions (and optionally offsets) for a single document and field.
To iterate over the matches, call MatchesIterator.next() until it returns false , retrieving
positions and/or offsets after each call.
|
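A sketch of that iteration contract; weight, context and doc are assumed to come from an existing Weight, LeafReaderContext and document id:

    import org.apache.lucene.search.Matches;
    import org.apache.lucene.search.MatchesIterator;

    Matches matches = weight.matches(context, doc);      // weight/context/doc assumed available
    if (matches != null) {
      MatchesIterator it = matches.getMatches("body");   // per-field iterator, may be null
      while (it != null && it.next()) {
        int start = it.startPosition();
        int end = it.endPosition();
        // consume the match positions [start, end] here
      }
    }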
MatchesUtils |
|
MatchingQueries<T extends QueryMatch> |
Class to hold the results of matching a single Document
against queries held in the Monitor
|
MatchingReaders |
Computes which segments have identical field name to number mappings,
which allows stored fields and term vectors in this codec to be bulk-merged.
|
MatchNoDocsQuery |
A query that matches no documents.
|
MatchNoDocsQueryNode |
A MatchNoDocsQueryNode indicates that a query node tree or subtree
will not match any documents if executed in the index.
|
MatchNoDocsQueryNodeBuilder |
|
MathUtil |
Math static utility methods.
|
MaxDocValueSource |
|
MaxFloatFunction |
MaxFloatFunction returns the max of its components.
|
MaxNonCompetitiveBoostAttribute |
|
MaxNonCompetitiveBoostAttributeImpl |
|
MaxPayloadFunction |
Returns the maximum payload score seen, else 1 if there are no payloads on the doc.
|
MaxScoreAccumulator |
Maintains the maximum score and its corresponding document id concurrently
|
MaxScoreAccumulator.DocAndScore |
|
MaxScoreCache |
Compute maximum scores based on Impacts and keep them in a cache in
order not to run expensive similarity score computations multiple times on
the same data.
|
MaxScoreSumPropagator |
Utility class to propagate scoring information in BooleanQuery , which
computes the score as the sum of the scores of its matching clauses.
|
MemoryAccountingBitsetCollector |
Bitset collector which supports memory tracking
|
MemoryIndex |
High-performance single-document main memory Apache Lucene fulltext search index.
|
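A minimal sketch (field name and text are illustrative) of indexing a single document in memory and scoring a query against it:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.memory.MemoryIndex;
    import org.apache.lucene.search.TermQuery;

    MemoryIndex mi = new MemoryIndex();
    mi.addField("content", "the quick brown fox", new StandardAnalyzer());
    float score = mi.search(new TermQuery(new Term("content", "fox")));  // > 0 means a hit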
MemoryIndex.BinaryDocValuesProducer |
|
MemoryIndex.MemoryDocValuesIterator |
|
MemoryIndex.NumericDocValuesProducer |
|
MemoryIndex.SliceByteStartArray |
|
MemoryIndexOffsetStrategy |
|
MemoryTracker |
Tracks dynamic allocations/deallocations of memory for transient objects
|
MergedIterator<T extends java.lang.Comparable<T>> |
Provides a merged sorted view from several sorted iterators.
|
MergedIterator.SubIterator<I extends java.lang.Comparable<I>> |
|
MergedIterator.TermMergeQueue<C extends java.lang.Comparable<C>> |
|
MergeInfo |
A MergeInfo provides information required for a MERGE context.
|
MergePolicy |
Expert: a MergePolicy determines the sequence of
primitive merge operations.
|
MergePolicy.MergeAbortedException |
|
MergePolicy.MergeContext |
This interface represents the current context of the merge selection process.
|
MergePolicy.MergeException |
Exception thrown if there are any problems while executing a merge.
|
MergePolicy.MergeReader |
|
MergePolicy.MergeSpecification |
A MergeSpecification instance provides the information
necessary to perform multiple merges.
|
MergePolicy.OneMerge |
OneMerge provides the information necessary to perform
an individual primitive merge operation, resulting in
a single new segment.
|
MergePolicy.OneMergeProgress |
Progress and state for an executing merge.
|
MergePolicy.OneMergeProgress.PauseReason |
Reason for pausing the merge thread.
|
MergeRateLimiter |
|
MergeReaderWrapper |
This is a hack to make index sorting fast, with a LeafReader that always returns merge instances when you ask for the codec readers.
|
MergeScheduler |
Expert: IndexWriter uses an instance
implementing this interface to execute the merges
selected by a MergePolicy .
|
MergeScheduler.MergeSource |
Provides access to new merges and executes the actual merge
|
MergeState |
Holds common state used during segment merging.
|
MergeState.DocMap |
A map of doc IDs.
|
MergeTrigger |
|
Message |
Message interface for lazy loading.
|
MessageImpl |
Default implementation of Message interface.
|
MinFloatFunction |
MinFloatFunction returns the min of its components.
|
MinHashFilter |
Generate min hash tokens from an incoming stream of tokens.
|
MinHashFilter.FixedSizeTreeSet<E extends java.lang.Comparable<E>> |
|
MinHashFilter.LongPair |
128 bits of state
|
MinHashFilterFactory |
|
MinimizationOperations |
Operations for minimizing automata.
|
MinimizationOperations.IntPair |
|
MinimizationOperations.StateList |
|
MinimizationOperations.StateListNode |
|
MinimizingConjunctionMatchesIterator |
|
MinimumShouldMatchIntervalsSource |
|
MinimumShouldMatchIntervalsSource.MinimumMatchesIterator |
|
MinimumShouldMatchIntervalsSource.MinimumShouldMatchIntervalIterator |
|
MinPayloadFunction |
Calculates the minimum payload seen
|
MinShouldMatchSumScorer |
|
MMapDirectory |
|
ModifierQueryNode |
A ModifierQueryNode indicates the modifier value (+,-,?,NONE) for
each term on the query string.
|
ModifierQueryNode.Modifier |
Modifier type: such as required (REQ), prohibited (NOT)
|
ModifierQueryNodeBuilder |
|
Monitor |
A Monitor contains a set of Query objects with associated IDs, and efficiently
matches them against sets of Document objects.
|
Monitor.QueryCacheStats |
Statistics for the query cache and query index
|
Monitor.StandardQueryCollector<T extends QueryMatch> |
|
MonitorConfiguration |
Encapsulates various configuration settings for a Monitor's query index
|
MonitorQuery |
Defines a query to be stored in a Monitor
|
MonitorQuerySerializer |
Serializes and deserializes MonitorQuery objects into byte streams.
Use this for persistent query indexes.
|
MonitorUpdateListener |
For reporting events on a Monitor's query index
|
MonotonicBlockPackedReader |
|
MonotonicBlockPackedWriter |
A writer for large monotonically increasing sequences of positive longs.
|
MonotonicLongValues |
|
MonotonicLongValues.Builder |
|
MoreLikeThis |
Generate "more like this" similarity queries.
|
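A rough sketch of generating a "more like this" query; reader, searcher and the seed document id 42 are assumptions for illustration:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queries.mlt.MoreLikeThis;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TopDocs;

    MoreLikeThis mlt = new MoreLikeThis(reader);         // reader: an existing IndexReader
    mlt.setAnalyzer(new StandardAnalyzer());
    mlt.setFieldNames(new String[] { "body" });          // fields to mine terms from (illustrative)
    Query like = mlt.like(42);                           // 42: doc id of the seed document
    TopDocs similar = searcher.search(like, 10);         // searcher assumed available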
MoreLikeThis.FreqQ |
PriorityQueue that orders words by score.
|
MoreLikeThis.Int |
Use for frequencies and to avoid renewing Integers.
|
MoreLikeThis.ScoreTerm |
|
MoreLikeThisQuery |
A simple wrapper for MoreLikeThis for use in scenarios where a Query object is required, e.g.
in custom QueryParser extensions.
|
MSBRadixSorter |
Radix sorter for variable-length strings.
|
MultiBits |
Concatenates multiple Bits together, on every lookup.
|
MultiBoolFunction |
Abstract ValueSource implementation which wraps multiple ValueSources
and applies an extendible boolean function to their values.
|
MultiCollector |
|
MultiCollector.MinCompetitiveScoreAwareScorable |
|
MultiCollector.MultiLeafCollector |
|
MultiCollectorManager |
|
MultiDocValues |
A wrapper for CompositeIndexReader providing access to DocValues.
|
MultiDocValues.MultiSortedDocValues |
Implements SortedDocValues over n subs, using an OrdinalMap
|
MultiDocValues.MultiSortedSetDocValues |
Implements MultiSortedSetDocValues over n subs, using an OrdinalMap
|
MultiFieldQueryNodeProcessor |
This processor is used to expand terms so the query looks for the same term
in different fields.
|
MultiFieldQueryParser |
A QueryParser which constructs queries to search multiple fields.
|
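A small sketch (field names and query text are illustrative) of parsing one query string against several fields:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
    import org.apache.lucene.search.Query;

    String[] fields = { "title", "body" };               // illustrative field names
    MultiFieldQueryParser parser = new MultiFieldQueryParser(fields, new StandardAnalyzer());
    Query q = parser.parse("inverted index");            // parse() throws ParseException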
MultiFields |
|
MultiFloatFunction |
Abstract ValueSource implementation which wraps multiple ValueSources
and applies an extendible float function to their values.
|
MultiFunction |
Abstract parent class for ValueSource implementations that wrap multiple
ValueSources and apply their own logic.
|
MultiLeafFieldComparator |
|
MultiLeafReader |
|
MultiLevelSkipListReader |
This abstract class reads skip lists with multiple levels.
|
MultiLevelSkipListReader.SkipBuffer |
used to buffer the top skip levels
|
MultiLevelSkipListWriter |
This abstract class writes skip lists with multiple levels.
|
MultiMatchingQueries<T extends QueryMatch> |
Class to hold the results of matching a batch of Document s
against queries held in the Monitor
|
MultiNormsLeafSimScorer |
Copy of LeafSimScorer that sums document's norms from multiple fields.
|
MultiNormsLeafSimScorer.MultiFieldNormValues |
|
MultiPassIndexSplitter |
This tool splits input index into multiple equal parts.
|
MultiPassIndexSplitter.FakeDeleteIndexReader |
This class emulates deletions on the underlying index.
|
MultiPassIndexSplitter.FakeDeleteLeafIndexReader |
|
MultipassTermFilteredPresearcher |
A TermFilteredPresearcher that indexes queries multiple times, with terms collected
from different routes through a querytree.
|
MultiPhraseQuery |
A generalized version of PhraseQuery , with the possibility of
adding more than one term at the same position that are treated as a disjunction (OR).
|
MultiPhraseQuery.Builder |
A builder for multi-phrase queries
|
MultiPhraseQuery.PostingsAndPosition |
|
MultiPhraseQuery.UnionFullPostingsEnum |
|
MultiPhraseQuery.UnionPostingsEnum |
Takes the logical union of multiple PostingsEnum iterators.
|
MultiPhraseQuery.UnionPostingsEnum.DocsQueue |
disjunction of postings ordered by docid.
|
MultiPhraseQuery.UnionPostingsEnum.PositionsQueue |
queue of terms for a single document.
|
MultiPhraseQueryNode |
|
MultiPhraseQueryNodeBuilder |
|
MultiPostingsEnum |
|
MultiPostingsEnum.EnumWithSlice |
|
MultiRangeQuery |
Abstract class for range queries involving multiple ranges against physical points such as IntPoints
All ranges are logically ORed together
TODO: Add capability for handling overlapping ranges at rewrite time
|
MultiRangeQuery.Builder |
A builder for multirange queries.
|
MultiRangeQuery.RangeClause |
Representation of a single clause in a MultiRangeQuery
|
MultiReader |
|
Multiset<T> |
A Multiset is a set that allows for duplicate elements.
|
MultiSimilarity |
Implements the CombSUM method for combining evidence from multiple
similarity values described in: Joseph A.
|
MultiSimilarity.MultiSimScorer |
|
MultiSorter |
|
MultiSorter.LeafAndDocID |
|
MultiTermHighlighting |
Support for highlighting multi-term queries.
|
MultiTermHighlighting.AutomataCollector |
|
MultiTermIntervalsSource |
|
MultiTermQuery |
An abstract Query that matches documents
containing a subset of terms provided by a FilteredTermsEnum enumeration.
|
MultiTermQuery.RewriteMethod |
Abstract class that defines how the query is rewritten.
|
MultiTermQuery.TopTermsBlendedFreqScoringRewrite |
A rewrite method that first translates each term into a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, but adjusts
the frequencies used for scoring to be blended across the terms; otherwise
the rarest term typically ranks highest (often not useful, e.g. in the set of
expanded terms in a FuzzyQuery).
|
MultiTermQuery.TopTermsBoostOnlyBooleanQueryRewrite |
A rewrite method that first translates each term into a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, but the scores
are only computed as the boost.
|
MultiTermQuery.TopTermsScoringBooleanQueryRewrite |
A rewrite method that first translates each term into a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
MultiTermQueryConstantScoreWrapper<Q extends MultiTermQuery> |
|
MultiTermQueryConstantScoreWrapper.TermAndState |
|
MultiTermQueryConstantScoreWrapper.WeightOrDocIdSet |
|
MultiTermRewriteMethodProcessor |
|
MultiTerms |
Exposes flex API, merged from flex API of
sub-segments.
|
MultiTermsEnum |
|
MultiTermsEnum.TermMergeQueue |
|
MultiTermsEnum.TermsEnumIndex |
|
MultiTermsEnum.TermsEnumWithSlice |
|
MultiTrie |
The MultiTrie is a Trie of Tries.
|
MultiTrie2 |
The MultiTrie is a Trie of Tries.
|
MultiValuedDoubleFieldSource |
|
MultiValuedFloatFieldSource |
|
MultiValuedIntFieldSource |
|
MultiValuedLongFieldSource |
|
MultiValueSource |
|
MurmurHash2 |
This is a very fast, non-cryptographic hash suitable for general hash-based
lookup.
|
MutablePointsReaderUtils |
Utility APIs for sorting and partitioning buffered points.
|
MutablePointValues |
|
MutableValue |
Base class for all mutable values.
|
MutableValueBool |
|
MutableValueDate |
|
MutableValueDouble |
|
MutableValueFloat |
|
MutableValueInt |
|
MutableValueLong |
|
MutableValueStr |
|
NamedMatches |
Utility class to help extract the set of sub queries that have matched from
a larger query.
|
NamedMatches.NamedQuery |
|
NamedSPILoader<S extends NamedSPILoader.NamedSPI> |
Helper class for loading named SPIs from classpath (e.g.
|
NamedSPILoader.NamedSPI |
|
NamedThreadFactory |
A default ThreadFactory implementation that accepts the name prefix
of the created threads as a constructor argument.
|
NativeFSLockFactory |
|
NativeFSLockFactory.NativeFSLock |
|
NativePosixUtil |
|
NativeUnixDirectory |
A Directory implementation for all Unixes that uses
DIRECT I/O to bypass OS level IO caching during
merging.
|
NativeUnixDirectory.NativeUnixIndexInput |
|
NativeUnixDirectory.NativeUnixIndexOutput |
|
NearestFuzzyQuery |
Simplification of FuzzyLikeThisQuery, to be used in the context of KNN classification.
|
NearestFuzzyQuery.FieldVals |
|
NearestFuzzyQuery.ScoreTerm |
|
NearestFuzzyQuery.ScoreTermQueue |
|
NearestNeighbor |
KNN search on top of 2D lat/lon indexed points.
|
NearestNeighbor.Cell |
|
NearestNeighbor.NearestHit |
|
NearestNeighbor.NearestVisitor |
|
NearSpansOrdered |
A Spans that is formed from the ordered subspans of a SpanNearQuery
where the subspans do not overlap and have a maximum slop between them.
|
NearSpansUnordered |
|
NGramDistance |
N-Gram version of edit distance based on paper by Grzegorz Kondrak,
"N-gram similarity and distance".
|
NGramFilterFactory |
|
NGramPhraseQuery |
This is a PhraseQuery which is optimized for n-gram phrase query.
|
NGramTokenFilter |
Tokenizes the input into n-grams of the given size(s).
|
NGramTokenizer |
Tokenizes the input into n-grams of the given size(s).
|
NGramTokenizerFactory |
|
NIOFSDirectory |
An FSDirectory implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.
|
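A minimal sketch of opening an index through this Directory implementation; the filesystem path is illustrative and the constructors throw IOException:

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.NIOFSDirectory;

    Directory dir = new NIOFSDirectory(Paths.get("/tmp/example-index"));  // path is illustrative
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));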
NIOFSDirectory.NIOFSIndexInput |
Reads bytes with FileChannel.read(ByteBuffer, long)
|
NLS |
MessageBundles classes extend this class, to implement a bundle.
|
NLSException |
Interface that exceptions should implement to support lazy loading of messages.
|
NoChildOptimizationQueryNodeProcessor |
|
NodeHash<T> |
|
NoDeletionPolicy |
|
NoLockFactory |
|
NoLockFactory.NoLock |
|
NoMergePolicy |
|
NoMergeScheduler |
|
NonOverlappingIntervalsSource |
|
NonOverlappingIntervalsSource.NonOverlappingIterator |
|
NoOpOffsetStrategy |
Never returns offsets.
|
NoOutputs |
A null FST Outputs implementation; use this if
you just want to build an FSA.
|
Normalization |
This class acts as the base class for the implementations of the term
frequency normalization methods in the DFR framework.
|
Normalization.NoNormalization |
Implementation used when there is no normalization.
|
NormalizationH1 |
Normalization model that assumes a uniform distribution of the term frequency.
|
NormalizationH2 |
Normalization model in which the term frequency is inversely related to the
length.
|
NormalizationH3 |
Dirichlet Priors normalization
|
NormalizationZ |
Pareto-Zipf Normalization
|
NormalizeCharMap |
|
NormalizeCharMap.Builder |
Builds a NormalizeCharMap.
|
NormsConsumer |
Abstract API that consumes normalization values.
|
NormsConsumer.NumericDocValuesSub |
Tracks state of one numeric sub-reader that we are merging
|
NormsFieldExistsQuery |
A Query that matches documents that have a value for a given field
as reported by field norms.
|
NormsFormat |
Encodes/decodes per-document score normalization values.
|
NormsProducer |
Abstract API that produces field normalization values
|
NormValueSource |
Function that returns the decoded norm for every document.
|
NormValuesWriter |
Buffers up pending long per doc, then flushes when
segment flushes.
|
NormValuesWriter.BufferedNorms |
|
NorwegianAnalyzer |
|
NorwegianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
NorwegianLightStemFilter |
|
NorwegianLightStemFilterFactory |
|
NorwegianLightStemmer |
Light Stemmer for Norwegian.
|
NorwegianMinimalStemFilter |
|
NorwegianMinimalStemFilterFactory |
|
NorwegianMinimalStemmer |
Minimal Stemmer for Norwegian Bokmål (no-nb) and Nynorsk (no-nn)
|
NorwegianStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a Snowball script.
|
NotContainedByIntervalsSource |
|
NotContainedByIntervalsSource.NotContainedByIterator |
|
NotContainingIntervalsSource |
|
NotContainingIntervalsSource.NotContainingIterator |
|
NotDocIdSet |
|
NoTokenFoundQueryNode |
A NoTokenFoundQueryNode is used if a term is converted into no tokens
by the tokenizer/lemmatizer/analyzer (null).
|
NotQuery |
Factory for prohibited clauses
|
NRTCachingDirectory |
Wraps a RAMDirectory
around any provided delegate directory, to
be used during NRT search.
|
NRTSuggester |
NRTSuggester executes Top N search on a weighted FST specified by a CompletionScorer
|
NRTSuggester.PayLoadProcessor |
Helper to encode/decode payload (surface + PAYLOAD_SEP + docID) output
|
NRTSuggester.ScoringPathComparator |
|
NRTSuggesterBuilder |
|
NRTSuggesterBuilder.Entry |
|
NullFragmenter |
Fragmenter implementation which does not fragment the text.
|
NumberDateFormat |
This Format parses Long into date strings and vice-versa.
|
NumDocsValueSource |
|
NumericDocValues |
A per-document numeric value.
|
NumericDocValuesField |
Field that stores a per-document long value for scoring,
sorting or value retrieval.
|
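A short sketch (field name and value are illustrative) of storing a per-document long as doc values and later sorting on it:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.NumericDocValuesField;
    import org.apache.lucene.search.Sort;
    import org.apache.lucene.search.SortField;

    Document doc = new Document();
    doc.add(new NumericDocValuesField("popularity", 42L));  // per-document long, not indexed or stored
    // later, sort search results by that doc-values field, highest first
    Sort byPopularity = new Sort(new SortField("popularity", SortField.Type.LONG, true));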
NumericDocValuesFieldUpdates |
|
NumericDocValuesFieldUpdates.Iterator |
|
NumericDocValuesFieldUpdates.SingleValueNumericDocValuesFieldUpdates |
|
NumericDocValuesWriter |
Buffers up pending long per doc, then flushes when
segment flushes.
|
NumericDocValuesWriter.BufferedNumericDocValues |
|
NumericPayloadTokenFilter |
|
NumericPayloadTokenFilterFactory |
|
NumericUtils |
Helper APIs to encode numeric values as sortable bytes and vice-versa.
|
OffHeapFSTStore |
Provides off heap storage of finite state machine (FST),
using underlying index input instead of byte store on heap
|
OfflinePointReader |
Reads points from disk in a fixed-width format, previously written with OfflinePointWriter .
|
OfflinePointReader.OfflinePointValue |
Reusable implementation for a point value offline
|
OfflinePointWriter |
Writes points to disk in a fixed-width format.
|
OfflineSorter |
On-disk sorting of byte arrays.
|
OfflineSorter.BufferSize |
A bit more descriptive unit for constructors.
|
OfflineSorter.ByteSequencesReader |
Utility class to read length-prefixed byte[] entries from an input.
|
OfflineSorter.ByteSequencesWriter |
Utility class to emit length-prefixed byte[] entries to an output stream for sorting.
|
OfflineSorter.FileAndTop |
|
OfflineSorter.Partition |
Holds one partition of items, either loaded into memory or based on a file.
|
OffsetAttribute |
The start and end character offset of a Token.
|
OffsetAttributeImpl |
|
OffsetIntervalsSource |
Tracks a reference intervals source, and produces a pseudo-interval that appears
either one position before or one position after each interval from the reference
|
OffsetIntervalsSource.OffsetIntervalIterator |
|
OffsetLimitTokenFilter |
This TokenFilter limits the number of tokens while indexing by adding up the
current offset.
|
OffsetsEnum |
|
OffsetsEnum.MultiOffsetsEnum |
A view over several OffsetsEnum instances, merging them in-place
|
OffsetsEnum.OfMatchesIterator |
|
OffsetsEnum.OfMatchesIteratorWithSubs |
|
OffsetsEnum.OfMatchesIteratorWithSubs.CachedOE |
|
OffsetsEnum.OfPostings |
|
OneMergeWrappingMergePolicy |
A wrapping merge policy that wraps the MergePolicy.OneMerge
objects returned by the wrapped merge policy.
|
OnHeapFSTStore |
Provides storage of finite state machine (FST),
using byte array or byte store allocated on heap.
|
OpaqueQueryNode |
An OpaqueQueryNode is used to specify values that are not supposed to
be parsed by the parser.
|
OpenRangeQueryNodeProcessor |
|
OpenStringBuilder |
A StringBuilder that allows one to access the array.
|
Operations |
Automata operations.
|
Operations.PointTransitions |
|
Operations.PointTransitionSet |
|
Operations.TransitionList |
|
Optimizer |
The Optimizer class is a Trie that will be reduced (have empty rows removed).
|
Optimizer2 |
The Optimizer class is a Trie that will be reduced (have empty rows removed).
|
OrderedIntervalsSource |
|
OrderedIntervalsSource.OrderedIntervalIterator |
|
OrdinalMap |
Maps per-segment ordinals to/from global ordinal space, using a compact packed-ints representation.
|
OrdinalMap.SegmentMap |
|
OrdinalMap.TermsEnumIndex |
|
OrdsBlockTreeTermsReader |
|
OrdsBlockTreeTermsWriter |
This is just like BlockTreeTermsWriter , except it also stores a version per term, and adds a method to its TermsEnum
implementation to seekExact only if the version is >= the specified version.
|
OrdsBlockTreeTermsWriter.FieldMetaData |
|
OrdsBlockTreeTermsWriter.PendingBlock |
|
OrdsBlockTreeTermsWriter.PendingEntry |
|
OrdsBlockTreeTermsWriter.PendingTerm |
|
OrdsBlockTreeTermsWriter.SubIndex |
|
OrdsFieldReader |
BlockTree's implementation of Terms .
|
OrdsIntersectTermsEnum |
|
OrdsIntersectTermsEnumFrame |
|
OrdsSegmentTermsEnum |
Iterates through terms in this field.
|
OrdsSegmentTermsEnum.InputOutput |
Holds a single input (IntsRef) + output pair.
|
OrdsSegmentTermsEnumFrame |
|
OrdTermState |
|
OrQuery |
Factory for disjunctions
|
OrQueryNode |
An OrQueryNode represents an OR boolean operation performed on a list
of nodes.
|
Outputs<T> |
Represents the outputs for an FST, providing the basic
algebra required for building and traversing the FST.
|
OutputStreamDataOutput |
|
OutputStreamIndexOutput |
Implementation class for buffered IndexOutput that writes to an OutputStream .
|
OverlappingIntervalsSource |
|
OverlaySingleDocTermsLeafReader |
Overlays a 2nd LeafReader for the terms of one field, otherwise the primary reader is
consulted.
|
Packed16ThreeBlocks |
Packs integers into 3 shorts (48 bits per value).
|
Packed64 |
Space optimized random access capable array of values with a fixed number of
bits/value.
|
Packed64SingleBlock |
This class is similar to Packed64 except that it trades space for
speed by ensuring that a single block needs to be read/written in order to
read/write a value.
|
Packed64SingleBlock.Packed64SingleBlock1 |
|
Packed64SingleBlock.Packed64SingleBlock10 |
|
Packed64SingleBlock.Packed64SingleBlock12 |
|
Packed64SingleBlock.Packed64SingleBlock16 |
|
Packed64SingleBlock.Packed64SingleBlock2 |
|
Packed64SingleBlock.Packed64SingleBlock21 |
|
Packed64SingleBlock.Packed64SingleBlock3 |
|
Packed64SingleBlock.Packed64SingleBlock32 |
|
Packed64SingleBlock.Packed64SingleBlock4 |
|
Packed64SingleBlock.Packed64SingleBlock5 |
|
Packed64SingleBlock.Packed64SingleBlock6 |
|
Packed64SingleBlock.Packed64SingleBlock7 |
|
Packed64SingleBlock.Packed64SingleBlock8 |
|
Packed64SingleBlock.Packed64SingleBlock9 |
|
Packed8ThreeBlocks |
Packs integers into 3 bytes (24 bits per value).
|
PackedDataInput |
A DataInput wrapper to read unaligned, variable-length packed
integers.
|
PackedDataOutput |
A DataOutput wrapper to write unaligned, variable-length packed
integers.
|
PackedInts |
Simplistic compression for array of unsigned long values.
|
PackedInts.Decoder |
A decoder for packed integers.
|
PackedInts.Encoder |
An encoder for packed integers.
|
PackedInts.Format |
A format to write packed ints.
|
PackedInts.FormatAndBits |
Simple class that holds a format and a number of bits per value.
|
PackedInts.Mutable |
A packed integer array that can be modified.
|
PackedInts.MutableImpl |
|
PackedInts.NullReader |
|
PackedInts.Reader |
A read-only random access array of positive integers.
|
PackedInts.ReaderImpl |
A simple base for Readers that keeps track of valueCount and bitsPerValue.
|
PackedInts.ReaderIterator |
Run-once iterator interface, to decode previously saved PackedInts.
|
PackedInts.ReaderIteratorImpl |
|
PackedInts.Writer |
A write-once Writer.
|
PackedLongValues |
Utility class to compress integers into a LongValues instance.
|
PackedLongValues.Builder |
|
PackedReaderIterator |
|
PackedTokenAttributeImpl |
|
PackedWriter |
|
PagedBytes |
Represents a logical byte[] as a series of pages.
|
PagedBytes.Reader |
Provides methods to read BytesRefs from a frozen
PagedBytes.
|
PagedGrowableWriter |
|
PagedMutable |
|
PairOutputs<A,B> |
An FST Outputs implementation, holding two other outputs.
|
PairOutputs.Pair<A,B> |
Holds a single pair of two outputs.
|
ParallelCompositeReader |
|
ParallelLeafReader |
A LeafReader which reads multiple, parallel indexes.
|
ParallelLeafReader.ParallelFields |
|
ParallelMatcher<T extends QueryMatch> |
Matcher class that runs matching queries in parallel.
|
ParallelMatcher.MatcherTask |
|
ParallelMatcher.ParallelMatcherFactory<T extends QueryMatch> |
|
ParallelPostingsArray |
|
ParentChildrenBlockJoinQuery |
A query that returns all the matching child documents for a specific parent document
indexed together in the same block.
|
ParseException |
This exception is thrown when parse errors are encountered.
|
ParseException |
This exception is thrown when parse errors are encountered.
|
ParseException |
This exception is thrown when parse errors are encountered.
|
ParserException |
Thrown when the xml queryparser encounters
invalid syntax/configuration.
|
ParserExtension |
This class represents an extension base class to the Lucene standard
QueryParser .
|
PartitionMatcher<T extends QueryMatch> |
A multi-threaded matcher that collects all possible matches in one pass, and
then partitions them amongst a number of worker threads to perform the actual
matching.
|
PartitionMatcher.MatchTask |
|
PartitionMatcher.PartitionMatcherFactory<T extends QueryMatch> |
|
PartOfSpeechAttribute |
|
PartOfSpeechAttribute |
Part of Speech attributes for Korean.
|
PartOfSpeechAttributeImpl |
|
PartOfSpeechAttributeImpl |
Part of Speech attributes for Korean.
|
Passage |
Represents a passage (typically a sentence of the document).
|
PassageFormatter |
Creates a formatted snippet from the top passages.
|
PassageScorer |
|
PathHierarchyTokenizer |
Tokenizer for path-like hierarchies.
|
PathHierarchyTokenizerFactory |
|
PathNode |
SmartChineseAnalyzer internal node representation
|
PathQueryNode |
A PathQueryNode is used to store queries like
/company/USA/California /product/shoes/brown.
|
PathQueryNode.QueryText |
Term text with a beginning and end position
|
PatternCaptureGroupFilterFactory |
|
PatternCaptureGroupTokenFilter |
CaptureGroup uses Java regexes to emit multiple tokens - one for each capture
group in one or more patterns.
|
PatternConsumer |
This interface is used to connect the XML pattern file parser to the
hyphenation tree.
|
PatternKeywordMarkerFilter |
|
PatternParser |
A SAX document handler to read and parse hyphenation patterns from an XML
file.
|
PatternReplaceCharFilter |
CharFilter that uses a regular expression to match the text targeted for replacement.
|
PatternReplaceCharFilterFactory |
|
PatternReplaceFilter |
A TokenFilter which applies a Pattern to each token in the stream,
replacing match occurrences with the specified replacement string.
|
PatternReplaceFilterFactory |
|
PatternTokenizer |
This tokenizer uses regex pattern matching to construct distinct tokens
for the input stream.
|
PatternTokenizerFactory |
|
PayloadAttribute |
The payload of a Token.
|
PayloadAttributeImpl |
|
PayloadDecoder |
|
PayloadEncoder |
Mainly for use with the DelimitedPayloadTokenFilter, converts char buffers to
BytesRef .
|
PayloadFilteredTermIntervalsSource |
|
PayloadFunction |
An abstract class that defines a way for PayloadScoreQuery instances to transform
the cumulative effects of payload scores for a document.
|
PayloadHelper |
Utility methods for encoding payloads.
|
PayloadScoreQuery |
A Query class that uses a PayloadFunction to modify the score of a wrapped SpanQuery
|
PayloadSpanCollector |
SpanCollector for collecting payloads
|
PayloadSpanUtil |
Experimental class to get set of payloads for most standard Lucene queries.
|
PendingDeletes |
This class handles accounting and applying pending deletes for live segment readers
|
PendingSoftDeletes |
|
PerFieldAnalyzerWrapper |
This analyzer is used to facilitate scenarios where different
fields require different analysis techniques.
|
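A small sketch of the wrapping described above; the choice of KeywordAnalyzer for an "id" field is illustrative:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.core.KeywordAnalyzer;
    import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    Map<String, Analyzer> perField = new HashMap<>();
    perField.put("id", new KeywordAnalyzer());           // keep identifiers as single tokens
    Analyzer analyzer = new PerFieldAnalyzerWrapper(new StandardAnalyzer(), perField);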
PerFieldDocValuesFormat |
Enables per field docvalues support.
|
PerFieldDocValuesFormat.ConsumerAndSuffix |
|
PerFieldMergeState |
Utility class to update the MergeState instance to be restricted to a set of fields.
|
PerFieldMergeState.FilterFieldInfos |
|
PerFieldMergeState.FilterFieldsProducer |
|
PerFieldPostingsFormat |
Enables per field postings support.
|
PerFieldPostingsFormat.FieldsGroup |
Group of fields written by one PostingsFormat
|
PerFieldPostingsFormat.FieldsGroup.Builder |
|
PerFieldPostingsFormat.FieldsReader |
|
PerFieldSimilarityWrapper |
Provides the ability to use a different Similarity for different fields.
|
PersianAnalyzer |
|
PersianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
PersianCharFilter |
CharFilter that replaces instances of Zero-width non-joiner with an
ordinary space.
|
PersianCharFilterFactory |
|
PersianNormalizationFilter |
|
PersianNormalizationFilterFactory |
|
PersianNormalizer |
Normalizer for Persian.
|
PersistentSnapshotDeletionPolicy |
A SnapshotDeletionPolicy which adds a persistence layer so that
snapshots can be maintained across the life of an application.
|
PForUtil |
Utility class to encode sequences of 128 small positive integers.
|
PhoneticFilter |
Create tokens for phonetic matches.
|
PhoneticFilterFactory |
|
PhraseHelper |
|
PhraseHelper.SingleFieldWithOffsetsFilterLeafReader |
Needed to support the ability to highlight a query irrespective of the field a query refers to
(aka requireFieldMatch=false).
|
PhraseHelper.SpanCollectedOffsetsEnum |
|
PhraseMatcher |
|
PhrasePositions |
Position of a term in a document that takes into account the term offset within the phrase.
|
PhraseQuery |
A Query that matches documents containing a particular sequence of terms.
|
PhraseQuery.Builder |
A builder for phrase queries.
|
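A minimal sketch of building a sloppy phrase query; the field, terms and slop value are illustrative:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.PhraseQuery;

    PhraseQuery query = new PhraseQuery.Builder()
        .add(new Term("body", "quick"))
        .add(new Term("body", "fox"))
        .setSlop(1)                                      // allow one intervening position
        .build();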
PhraseQuery.PostingsAndFreq |
|
PhraseQueryNodeBuilder |
|
PhraseQueue |
|
PhraseScorer |
|
PhraseSlopQueryNode |
|
PhraseSlopQueryNodeProcessor |
This processor removes invalid SlopQueryNode objects in the query
node tree.
|
PhraseWeight |
|
PhraseWildcardQuery |
A generalized version of PhraseQuery , built with one or more MultiTermQuery
that provide term expansions for multi-terms (one of the expanded terms must match).
|
PhraseWildcardQuery.Builder |
|
PhraseWildcardQuery.MultiTerm |
Phrase term with expansions.
|
PhraseWildcardQuery.PhraseTerm |
|
PhraseWildcardQuery.SingleTerm |
Phrase term with no expansion.
|
PhraseWildcardQuery.TermBytesTermState |
Holds a pair of term bytes - term state.
|
PhraseWildcardQuery.TermData |
Holds the TermState for all the collected Term ,
for a specific phrase term, for all segments.
|
PhraseWildcardQuery.TermsData |
|
PhraseWildcardQuery.TermStats |
Accumulates the doc freq and total term freq.
|
PhraseWildcardQuery.TestCounters |
Test counters incremented when assertions are enabled.
|
PKIndexSplitter |
Split an index based on a Query .
|
PKIndexSplitter.DocumentFilteredLeafIndexReader |
|
Placeholder |
Remove this file when adding back compat codecs
|
PlainTextDictionary |
Dictionary represented by a text file.
|
Point |
Represents a point on the earth's surface.
|
Point2D |
2D point implementation containing geo spatial logic.
|
PointInSetIncludingScoreQuery |
|
PointInSetIncludingScoreQuery.Stream |
|
PointInSetQuery |
Abstract query class to find all documents whose single or multi-dimensional point values, previously indexed with e.g.
|
PointInSetQuery.Stream |
Iterator of encoded point values.
|
PointQueryNode |
This query node represents a field query that holds a point value.
|
PointQueryNodeProcessor |
|
PointRangeQuery |
Abstract class for range queries against single or multidimensional points such as
IntPoint .
|
PointRangeQueryBuilder |
|
PointRangeQueryNode |
This query node represents a range query composed by PointQueryNode
bounds, which means the bound values are Number s.
|
PointRangeQueryNodeBuilder |
|
PointRangeQueryNodeProcessor |
|
PointReader |
One pass iterator through all points previously written with a
PointWriter , abstracting away whether points are read
from (offline) disk or simple arrays in heap.
|
PointsConfig |
This class holds the configuration used to parse numeric queries and create
PointValues queries.
|
PointsConfigListener |
|
PointsFormat |
Encodes/decodes indexed points.
|
PointsReader |
Abstract API to visit point values.
|
PointsWriter |
Abstract API to write points
|
PointValue |
Represents a dimensional point value written in the BKD tree.
|
PointValues |
Access to indexed numeric values.
|
PointValues.IntersectVisitor |
We recurse the BKD tree, using a provided instance of this to guide the recursion.
|
PointValues.Relation |
|
PointValuesWriter |
Buffers up pending byte[][] value(s) per doc, then flushes when segment flushes.
|
PointValuesWriter.MutableSortingPointValues |
|
PointWriter |
Appends many points, and then at the end provides a PointReader to iterate
those points.
|
PolishAnalyzer |
|
PolishAnalyzer.DefaultsHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
Polygon |
Represents a closed polygon on the earth's surface.
|
Polygon2D |
2D polygon implementation represented as a balanced interval tree of edges.
|
PorterStemFilter |
Transforms the token stream as per the Porter stemming algorithm.
|
PorterStemFilterFactory |
|
PorterStemmer |
Stemmer, implementing the Porter Stemming Algorithm.
The Stemmer class transforms a word into its root form.
|
PorterStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a Snowball script.
|
PortugueseAnalyzer |
|
PortugueseAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
PortugueseLightStemFilter |
|
PortugueseLightStemFilterFactory |
|
PortugueseLightStemmer |
Light Stemmer for Portuguese
|
PortugueseMinimalStemFilter |
|
PortugueseMinimalStemFilterFactory |
|
PortugueseMinimalStemmer |
Minimal Stemmer for Portuguese
|
PortugueseStemFilter |
|
PortugueseStemFilterFactory |
|
PortugueseStemmer |
Portuguese stemmer implementing the RSLP (Removedor de Sufixos da Lingua Portuguesa)
algorithm.
|
PortugueseStemmer |
This class was automatically generated by a Snowball to Java compiler.
It implements the stemming algorithm defined by a Snowball script.
|
POS |
Part of speech classification for Korean based on Sejong corpus classification.
|
POS.Tag |
Part of speech tag for Korean based on Sejong corpus classification.
|
POS.Type |
The type of the token.
|
PositionIncrementAttribute |
Determines the position of this token
relative to the previous Token in a TokenStream, used in phrase
searching.
|
PositionIncrementAttributeImpl |
|
PositionLengthAttribute |
Determines how many positions this
token spans.
|
PositionLengthAttributeImpl |
|
PositionSpan |
Utility class to record Positions Spans
|
PositiveIntOutputs |
An FST Outputs implementation where each output
is a non-negative long value.
|
PositiveScoresOnlyCollector |
A Collector implementation which wraps another
Collector and makes sure only documents with
scores > 0 are collected.
|
PostingsEnum |
Iterates through the postings.
|
PostingsFormat |
Encodes/decodes terms, postings, and proximity data.
|
PostingsFormat.Holder |
This static holder class prevents classloading deadlock by delaying
init of postings formats until needed.
|
PostingsOffsetStrategy |
|
PostingsReaderBase |
The core terms dictionaries (BlockTermsReader,
BlockTreeTermsReader) interact with a single instance
of this class to manage creation of PostingsEnum instances.
|
PostingsWithTermVectorsOffsetStrategy |
|
PostingsWriterBase |
Class that plugs into term dictionaries, such as BlockTreeTermsWriter , and handles writing postings.
|
PowFloatFunction |
Function to raise the base "a" to the power "b"
|
PrecedenceQueryNodeProcessorPipeline |
|
PrecedenceQueryParser |
This query parser works exactly as the standard query parser ( StandardQueryParser ),
except that it respects boolean precedence, so <a AND b OR c AND d> is parsed to <(+a +b) (+c +d)>
instead of <+a +b +c +d>.
|
PrefixCodedTerms |
Prefix codes term instances (prefixes are shared).
|
PrefixCodedTerms.Builder |
Builds a PrefixCodedTerms: call add repeatedly, then finish.
|
PrefixCodedTerms.TermIterator |
|
PrefixCompletionQuery |
|
PrefixQuery |
A Query that matches documents containing terms with a specified prefix.
|
PrefixWildcardQueryNode |
|
PrefixWildcardQueryNodeBuilder |
|
Presearcher |
A Presearcher is used by the Monitor to reduce the number of queries actually
run against a Document.
|
PresearcherMatch<T extends QueryMatch> |
Wraps a QueryMatch with information about which queries were selected by the presearcher
|
PresearcherMatches<T extends QueryMatch> |
|
PrintStreamInfoStream |
InfoStream implementation over a PrintStream
such as System.out .
|
PriorityQueue<T> |
A PriorityQueue maintains a partial ordering of its elements such that the
least element can always be found in constant time.
|
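A hedged sketch of subclassing PriorityQueue; the Integer element type, capacity of 10 and ordering are arbitrary choices for illustration:

    import org.apache.lucene.util.PriorityQueue;

    // a bounded queue keeping the 10 largest values; lessThan() defines the ordering
    PriorityQueue<Integer> topN = new PriorityQueue<Integer>(10) {
      @Override
      protected boolean lessThan(Integer a, Integer b) {
        return a < b;                                    // smallest element sits at the top
      }
    };
    topN.insertWithOverflow(7);                          // evicts the current least element when full
    Integer least = topN.top();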
ProductFloatFunction |
ProductFloatFunction returns the product of its components.
|
ProtectedTermFilter |
A ConditionalTokenFilter that only applies its wrapped filters to tokens that
are not contained in a protected set.
|
ProtectedTermFilterFactory |
|
ProximityQueryNode |
A ProximityQueryNode represents a query where the terms should meet
specific distance conditions.
|
ProximityQueryNode.ProximityType |
utility class containing the distance condition and number
|
ProximityQueryNode.Type |
Distance condition: PARAGRAPH, SENTENCE, or NUMBER
|
PushPostingsWriterBase |
Extension of PostingsWriterBase , adding a push
API for writing each element of the postings.
|
Query |
The abstract base class for queries.
|
QueryAnalyzer |
Class to analyze and extract terms from a lucene query, to be used by
a Presearcher in indexing.
|
QueryAutoStopWordAnalyzer |
An Analyzer used primarily at query time to wrap another analyzer and provide a layer of protection
which prevents very common words from being passed into queries.
|
QueryBitSetProducer |
|
QueryBuilder |
This interface is used by implementor classes that build some kind of
object from a query tree.
|
QueryBuilder |
Implemented by objects that produce Lucene Query objects from XML streams.
|
QueryBuilder |
Creates queries from the Analyzer chain.
|
QueryBuilder.TermAndBoost |
Wraps a term and boost
|
QueryBuilderFactory |
|
QueryCache |
A cache for queries.
|
QueryCacheEntry |
|
QueryCachingPolicy |
A policy defining which filters should be cached.
|
QueryConfigHandler |
This class can be used to hold any query configuration and no field
configuration.
|
QueryDecomposer |
Split a disjunction query into its constituent parts, so that they can be indexed
and run separately in the Monitor.
|
QueryDocValues |
|
QueryIndex |
|
QueryIndex.CachePopulator |
|
QueryIndex.DataValues |
|
QueryIndex.FIELDS |
|
QueryIndex.Indexable |
|
QueryIndex.MonitorQueryCollector |
A Collector that decodes the stored query for each document hit.
|
QueryIndex.QueryBuilder |
|
QueryIndex.QueryCollector |
|
QueryIndex.QueryTermFilter |
|
QueryMatch |
Represents a match for a specific query and document
|
QueryNode |
A QueryNode is an interface implemented by all nodes on a QueryNode
tree.
|
QueryNodeError |
Error class with NLS support
|
QueryNodeException |
This exception should be thrown if something wrong happens when dealing with
QueryNode s.
|
QueryNodeImpl |
|
QueryNodeOperation |
Allows joining two QueryNode trees into one.
|
QueryNodeOperation.ANDOperation |
|
QueryNodeParseException |
This should be thrown when an exception happens during the query parsing from
string to the query node tree.
|
QueryNodeProcessor |
|
QueryNodeProcessorImpl |
This is a default implementation of the QueryNodeProcessor
interface; it is an abstract class, so it should be extended by classes that
want to process a QueryNode tree.
|
QueryNodeProcessorImpl.ChildrenList |
|
QueryNodeProcessorPipeline |
|
QueryParser |
This class is generated by JavaCC.
|
QueryParser |
This class is generated by JavaCC.
|
QueryParser.JJCalls |
|
QueryParser.JJCalls |
|
QueryParser.LookaheadSuccess |
|
QueryParser.LookaheadSuccess |
|
QueryParser.Operator |
The default operator for parsing queries.
|
QueryParserBase |
This class is overridden by QueryParser in QueryParser.jj
and acts to separate the majority of the Java code from the .jj grammar file.
|
QueryParserConstants |
Token literal values and constants.
|
QueryParserConstants |
Token literal values and constants.
|
QueryParserHelper |
This class is a helper for the query parser framework; it performs all three
query parser phases at once: text parsing, query processing and query
building.
|
QueryParserMessages |
Flexible Query Parser message bundle class
|
QueryParserTokenManager |
Token Manager.
|
QueryParserTokenManager |
Token Manager.
|
QueryParserUtil |
This class defines utility methods to (help) parse query strings into
Query objects.
|
QueryRescorer |
A Rescorer that uses a provided Query to assign
scores to the first-pass hits.
|
QueryScorer |
Scorer implementation which scores text fragments by the number of
unique query terms found.
|
QueryTermExtractor |
Utility class used to extract the terms used in a query, plus any weights.
|
QueryTermExtractor.BoostedTermExtractor |
|
QueryTermScorer |
Scorer implementation which scores text fragments by the number of
unique query terms found.
|
QueryTimeListener |
Notified of the time it takes to run individual queries against a set of documents
|
QueryTimeout |
Base for query timeout implementations, which will provide a shouldExit() method,
used with ExitableDirectoryReader .
|
QueryTimeoutImpl |
|
QueryTree |
A representation of a node in a query tree.
Queries are analyzed and converted into an abstract tree, consisting
of conjunction and disjunction nodes, and leaf nodes containing terms.
|
QueryTree.ConjunctionQueryTree |
|
QueryTree.DisjunctionQueryTree |
|
QueryTreeBuilder |
This class should be used when there is a builder for each type of node.
|
QueryValueSource |
QueryValueSource returns the relevance score of the query
|
QueryVisitor |
Allows recursion through a query tree
|
QuotedFieldQueryNode |
|
RadixSelector |
Radix selector.
|
RAFDirectory |
A straightforward implementation of FSDirectory
using java.io.RandomAccessFile.
|
RAFDirectory.RAFIndexInput |
Reads bytes with RandomAccessFile.seek(long) followed by
RandomAccessFile.read(byte[], int, int) .
|
RAMDirectory |
Deprecated.
|
RAMFile |
Deprecated.
|
RAMInputStream |
Deprecated.
|
RAMOutputStream |
Deprecated.
|
RamUsageEstimator |
Estimates the size (memory representation) of Java objects.
|
RamUsageEstimator.RamUsageQueryVisitor |
|
RamUsageUtil |
Utility methods to estimate the RAM usage of objects.
|
RandomAccessInput |
Random Access Index API.
|
RangeFieldQuery |
|
RangeFieldQuery.QueryType |
Used by RangeFieldQuery to check how each internal or leaf node relates to the query.
|
RangeMapFloatFunction |
RangeMapFloatFunction implements a map function over
another ValueSource whose values fall within min and max inclusive to target.
|
RangeQueryBuilder |
|
RangeQueryNode<T extends FieldValuePairQueryNode<?>> |
This interface should be implemented by a QueryNode that represents
some kind of range query.
|
RateLimitedIndexOutput |
|
RateLimiter |
Abstract base class to rate limit IO.
|
RateLimiter.SimpleRateLimiter |
Simple class to rate limit IO.
|
ReaderManager |
Utility class to safely share DirectoryReader instances across
multiple threads, while periodically reopening.
|
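A sketch of the acquire/release pattern this class supports; writer stands in for an existing IndexWriter, and the calls throw IOException:

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.ReaderManager;
    import org.apache.lucene.search.IndexSearcher;

    ReaderManager manager = new ReaderManager(writer);   // writer: an existing IndexWriter
    DirectoryReader reader = manager.acquire();          // borrow the current reader
    try {
      IndexSearcher searcher = new IndexSearcher(reader);
      // ... run searches against this snapshot ...
    } finally {
      manager.release(reader);                           // always return what was acquired
    }
    manager.maybeRefresh();                              // pick up recent index changes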
ReaderPool |
Holds shared SegmentReader instances.
|
ReadersAndUpdates |
|
ReadersAndUpdates.MergedDocValues<DocValuesInstance extends DocValuesIterator> |
This class merges the current on-disk DV with an incoming update DV instance,
giving the incoming update precedence in terms of values; in other words, the
values of the update always win over the on-disk version.
|
ReaderSlice |
Subreader slice from a parent composite reader.
|
ReaderUtil |
|
ReadingAttribute |
Attribute for Kuromoji reading data
|
ReadingAttribute |
Attribute for Korean reading data
|
ReadingAttributeImpl |
Attribute for Kuromoji reading data
|
ReadingAttributeImpl |
Attribute for Korean reading data
|
ReciprocalFloatFunction |
ReciprocalFloatFunction implements a reciprocal function f(x) = a/(mx+b), based on
the float value of a field or function as exported by ValueSource .
|
Rectangle |
Represents a lat/lon rectangle.
|
Rectangle2D |
2D rectangle implementation containing cartesian spatial logic.
|
RecyclingByteBlockAllocator |
|
RecyclingIntBlockAllocator |
|
Reduce |
The Reduce object is used to remove gaps in a Trie which stores a dictionary.
|
RefCount<T> |
Manages reference counting for a given object.
|
ReferenceManager<G> |
Utility class to safely share instances of a certain type across multiple
threads, while periodically refreshing them.
|
ReferenceManager.RefreshListener |
Use to receive notification when a refresh has
finished.
|
RegexCompletionQuery |
A CompletionQuery which takes a regular expression
as the prefix of the query term.
|
RegExp |
Regular Expression extension to Automaton .
|
RegExp.Kind |
The type of expression represented by a RegExp node.
|
RegexpQuery |
|
RegexpQueryHandler |
A query handler implementation that matches Regexp queries by indexing regex
terms by their longest static substring, and generates ngrams from Document
tokens to match them.
|
RegexpQueryNode |
|
RegexpQueryNodeBuilder |
|
RegexpQueryNodeProcessor |
Processor for Regexp queries.
|
RelativeIterator |
|
RemoveDeletedQueryNodesProcessor |
|
RemoveDuplicatesTokenFilter |
A TokenFilter which filters out Tokens that have the same position and term text as the previous token in the stream.
|
RemoveDuplicatesTokenFilterFactory |
|
RemoveEmptyNonLeafQueryNodeProcessor |
This processor removes every QueryNode that is not a leaf and has no
children.
|
RepeatingIntervalsSource |
Generates an iterator that spans repeating instances of a sub-iterator,
avoiding minimization.
|
RepeatingIntervalsSource.DuplicateIntervalIterator |
|
RepeatingIntervalsSource.DuplicateMatchesIterator |
|
ReqExclBulkScorer |
|
ReqExclScorer |
A Scorer for queries with a required subscorer
and an excluding (prohibited) sub Scorer .
|
ReqOptSumScorer |
A Scorer for queries with a required part and an optional part.
|
Rescorer |
Re-scores the topN results ( TopDocs ) from an original
query.
|
ResourceLoader |
Abstraction for loading resources (streams, files, and classes).
|
ResourceLoaderAware |
Interface for a component that needs to be initialized by
an implementation of ResourceLoader .
|
ReusableStringReader |
|
ReverseBytesReader |
Reads in reverse from a single byte[].
|
ReversePathHierarchyTokenizer |
Tokenizer for domain-like hierarchies.
|
ReverseRandomAccessReader |
Implements reverse read from a RandomAccessInput.
|
ReverseStringFilter |
Reverse token string, for example "country" => "yrtnuoc".
|
ReverseStringFilterFactory |
|
RewriteQuery<SQ extends SrndQuery> |
|
RoaringDocIdSet |
DocIdSet implementation inspired by http://roaringbitmap.org/.
The space is divided into blocks of 2^16 bits and each block is encoded
independently.
|
RoaringDocIdSet.Builder |
|
RoaringDocIdSet.ShortArrayDocIdSet |
DocIdSet implementation that can store documents up to 2^16-1 in a short[].
|
RollingBuffer<T extends RollingBuffer.Resettable> |
Acts like forever growing T[], but internally uses a
circular buffer to reuse instances of T.
|
RollingBuffer.Resettable |
Implement to reset an instance
|
RollingCharBuffer |
Acts like a forever growing char[] as you read
characters into it from the provided reader, but
internally it uses a circular buffer to only hold the
characters that haven't been freed yet.
|
RomanianAnalyzer |
|
RomanianAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
RomanianStemmer |
This class was automatically generated by a Snowball to Java compiler
It implements the stemming algorithm defined by a snowball script.
|
Row |
The Row class represents a row in a matrix representation of a trie.
|
RSLPStemmerBase |
Base class for stemmers that use a set of RSLP-like stemming steps.
|
RSLPStemmerBase.Rule |
A basic rule, with no exceptions.
|
RSLPStemmerBase.RuleWithSetExceptions |
A rule with a set of whole-word exceptions.
|
RSLPStemmerBase.RuleWithSuffixExceptions |
A rule with a set of exceptional suffixes.
|
RSLPStemmerBase.Step |
A step containing a list of rules.
|
RunAutomaton |
Finite-state automaton with fast run operation.
|
RussianAnalyzer |
|
RussianAnalyzer.DefaultSetHolder |
|
RussianLightStemFilter |
|
RussianLightStemFilterFactory |
|
RussianLightStemmer |
Light Stemmer for Russian.
|
RussianStemmer |
This class was automatically generated by a Snowball to Java compiler
It implements the stemming algorithm defined by a snowball script.
|
SameThreadExecutorService |
An ExecutorService that executes tasks immediately in the calling thread during submit.
|
ScaleFloatFunction |
Scales values to be between min and max.
|
ScaleFloatFunction.ScaleInfo |
|
ScandinavianFoldingFilter |
This filter folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o.
|
ScandinavianFoldingFilterFactory |
|
ScandinavianNormalizationFilter |
This filter normalizes use of the interchangeable Scandinavian characters æÆäÄöÖøØ
and folded variants (aa, ao, ae, oe and oo) by transforming them to åÅæÆøØ.
|
ScandinavianNormalizationFilterFactory |
|
Scorable |
Allows access to the score of a Query
|
Scorable.ChildScorable |
A child Scorer and its relationship to its parent.
|
ScoreAndDoc |
|
ScoreCachingWrappingScorer |
A Scorer which wraps another scorer and caches the score of the
current document.
|
ScoreDoc |
|
ScoreMode |
How to aggregate multiple child hit scores into a single parent score.
|
ScoreMode |
Different modes of search.
|
ScoreOrderFragmentsBuilder |
An implementation of FragmentsBuilder that outputs score-order fragments.
|
ScoreOrderFragmentsBuilder.ScoreComparator |
|
Scorer |
A Scorer is responsible for scoring a stream of tokens.
|
Scorer |
Expert: Common scoring functionality for different types of queries.
|
ScorerSupplier |
|
ScoringMatch |
A QueryMatch that reports scores for each match
|
ScoringRewrite<B> |
Base rewrite method that translates each term into a query, and keeps
the scores as computed by the query.
|
ScoringRewrite.TermFreqBoostByteStart |
Special implementation of BytesStartArray that keeps parallel arrays for boost and docFreq
|
ScriptAttribute |
This attribute stores the UTR #24 script value for a token of text.
|
ScriptAttributeImpl |
|
ScriptIterator |
An iterator that locates ISO 15924 script boundaries in text.
|
SearcherFactory |
|
SearcherLifetimeManager |
Keeps track of current plus old IndexSearchers, closing
the old ones once they have timed out.
|
SearcherLifetimeManager.PruneByAge |
Simple pruner that drops any searcher older by
more than the specified seconds, than the newest
searcher.
|
SearcherLifetimeManager.Pruner |
|
SearcherLifetimeManager.SearcherTracker |
|
SearcherManager |
Utility class to safely share IndexSearcher instances across multiple
threads, while periodically reopening.
|
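A minimal usage sketch for SearcherManager (not part of the Javadoc), assuming an IndexWriter named writer already exists; every acquired searcher must be released:

    SearcherManager manager = new SearcherManager(writer, new SearcherFactory());
    IndexSearcher searcher = manager.acquire();
    try {
      // run queries against 'searcher'
    } finally {
      manager.release(searcher); // release each acquired searcher exactly once
    }
    manager.maybeRefresh(); // pick up recent index changes when convenient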
SearchGroup<T> |
Represents a group that is found during the first pass search.
|
SearchGroup.GroupComparator<T> |
|
SearchGroup.GroupMerger<T> |
|
SearchGroup.MergedGroup<T> |
|
SearchGroup.ShardIter<T> |
|
SecondPassGroupingCollector<T> |
SecondPassGroupingCollector runs over an already collected set of
groups, further applying a GroupReducer to each group
|
SeekingTermSetTermsEnum |
A filtered TermsEnum that uses a BytesRefHash as a filter
|
SegGraph |
Graph representing possible tokens at each start offset in the sentence.
|
SegmentCacheable |
|
SegmentCommitInfo |
Embeds a [read-only] SegmentInfo and adds per-commit
fields.
|
SegmentCoreReaders |
Holds core readers that are shared (unchanged) when
SegmentReader is cloned or reopened
|
SegmentDocValues |
|
SegmentDocValuesProducer |
Encapsulates multiple producers when there are docvalues updates as one producer
|
SegmentInfo |
Information about a segment such as its name, directory, and files related
to the segment.
|
SegmentInfoFormat |
Expert: Controls the format of the
SegmentInfo (segment metadata file).
|
SegmentInfos |
A collection of segmentInfo objects with methods for operating on those
segments in relation to the file system.
|
SegmentInfos.FindSegmentsFile<T> |
Utility class for executing code that needs to do
something with the current segments file.
|
SegmentingTokenizerBase |
Breaks text into sentences with a BreakIterator and
allows subclasses to decompose these sentences into words.
|
SegmentMerger |
The SegmentMerger class combines two or more Segments, represented by an
IndexReader, into a single Segment.
|
SegmentReader |
IndexReader implementation over a single segment.
|
SegmentReadState |
Holder class for common parameters used during read.
|
SegmentTermsEnum |
Iterates through terms in this field.
|
SegmentTermsEnumFrame |
|
SegmentWriteState |
Holder class for common parameters used during write.
|
SegToken |
SmartChineseAnalyzer internal token
|
SegTokenFilter |
Filters a SegToken by converting full-width latin to half-width, then lowercasing latin.
|
SegTokenPair |
|
Selector |
An implementation of a selection algorithm, ie.
|
SentinelIntSet |
A native int hash-based set where one value is reserved to mean "EMPTY" internally.
|
SerbianNormalizationFilter |
Normalizes Serbian Cyrillic and Latin characters to "bald" Latin.
|
SerbianNormalizationFilterFactory |
|
SerbianNormalizationRegularFilter |
Normalizes Serbian Cyrillic to Latin.
|
SerialMergeScheduler |
A MergeScheduler that simply does each merge
sequentially, using the current thread.
|
SetKeywordMarkerFilter |
|
SetOnce<T> |
A convenient class which offers a semi-immutable object wrapper
implementation which allows one to set the value of an object exactly once,
and retrieve it many times.
|
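A minimal sketch of SetOnce semantics (illustrative, not from the Javadoc):

    SetOnce<String> token = new SetOnce<>();
    token.set("initial");       // the first and only write succeeds
    String value = token.get(); // may be read any number of times
    // token.set("other");      // a second write would throw SetOnce.AlreadySetException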
SetOnce.AlreadySetException |
|
SetOnce.Wrapper<T> |
Holding object and marking that it was already set
|
ShapeField |
A base shape utility class used for both LatLon (spherical) and XY (cartesian) shape fields.
|
ShapeField.DecodedTriangle |
|
ShapeField.DecodedTriangle.TYPE |
type of triangle
|
ShapeField.QueryRelation |
Query Relation Types
|
ShapeField.Triangle |
Polygons are decomposed into tessellated triangles using Tessellator; these triangles are
encoded and inserted as separate indexed POINT fields.
|
ShapeQuery |
|
ShapeQuery.RelationScorerSupplier |
utility class for implementing constant score logic specific to INTERSECT, WITHIN, and DISJOINT
|
ShingleAnalyzerWrapper |
|
ShingleFilter |
A ShingleFilter constructs shingles (token n-grams) from a token stream.
|
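A minimal sketch (assuming an upstream TokenStream named input) that emits bigram and trigram shingles:

    TokenStream shingles = new ShingleFilter(input, 2, 3); // min and max shingle size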
ShingleFilter.InputWindowToken |
|
ShingleFilterFactory |
|
Similarity |
Similarity defines the components of Lucene scoring.
|
Similarity.SimScorer |
Stores the weight for a query across the indexed collection.
|
SimilarityBase |
A subclass of Similarity that provides a simplified API for its
descendants.
|
SimpleAnalyzer |
|
SimpleBindings |
|
SimpleBoolFunction |
BoolFunction implementation which applies an extendible boolean
function to the values of a single wrapped ValueSource .
|
SimpleBoundaryScanner |
Simple boundary scanner implementation that divides fragments
based on a set of separator characters.
|
SimpleCollector |
Base Collector implementation that is used to collect all contexts.
|
SimpleFieldComparator<T> |
|
SimpleFieldFragList |
|
SimpleFloatFunction |
A simple float function with a single argument
|
SimpleFragListBuilder |
|
SimpleFragmenter |
Fragmenter implementation which breaks text up into same-size
fragments with no concerns over spotting sentence boundaries.
|
SimpleFragmentsBuilder |
A simple implementation of FragmentsBuilder.
|
SimpleFSDirectory |
Deprecated.
|
SimpleFSDirectory.SimpleFSIndexInput |
Reads bytes with SeekableByteChannel.read(ByteBuffer)
|
SimpleFSLockFactory |
Implements LockFactory using Files.createFile(java.nio.file.Path, java.nio.file.attribute.FileAttribute<?>...) .
|
SimpleFSLockFactory.SimpleFSLock |
|
SimpleGeoJSONPolygonParser |
Does minimal parsing of a GeoJSON object, to extract either Polygon or MultiPolygon, either directly as the top-level type, or if
the top-level type is Feature, as the geometry of that feature.
|
SimpleHTMLEncoder |
Simple Encoder implementation to escape text for HTML output
|
SimpleHTMLFormatter |
Simple Formatter implementation to highlight terms with a pre and
post tag.
|
SimpleMergedSegmentWarmer |
A very simple merged segment warmer that just ensures
data structures are initialized.
|
SimpleNaiveBayesClassifier |
A simplistic Lucene based NaiveBayes classifier, see http://en.wikipedia.org/wiki/Naive_Bayes_classifier
|
SimpleNaiveBayesDocumentClassifier |
A simplistic Lucene based NaiveBayes classifier, see http://en.wikipedia.org/wiki/Naive_Bayes_classifier
|
SimplePatternSplitTokenizer |
This tokenizer uses a Lucene RegExp or (expert usage) a pre-built determinized Automaton , to locate tokens.
|
SimplePatternSplitTokenizerFactory |
|
SimplePatternTokenizer |
This tokenizer uses a Lucene RegExp or (expert usage) a pre-built determinized Automaton , to locate tokens.
|
SimplePatternTokenizerFactory |
|
SimpleQueryParser |
SimpleQueryParser is used to parse human readable query syntax.
|
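A minimal sketch (assuming an Analyzer named analyzer) that parses the simple syntax against the "body" field:

    SimpleQueryParser parser = new SimpleQueryParser(analyzer, "body");
    Query query = parser.parse("wifi +router -cable"); // '+' requires a term, '-' excludes it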
SimpleQueryParser.State |
|
SimpleSpanFragmenter |
Fragmenter implementation which breaks text up into same-size
fragments but does not split up Spans .
|
SimpleTerm |
Base class for queries that expand to sets of simple terms.
|
SimpleTerm.MatchingTermVisitor |
|
SimpleTermRewriteQuery |
|
SimpleTextBKDReader |
Forked from BKDReader and simplified/specialized for SimpleText's usage
|
SimpleTextBKDReader.IntersectState |
|
SimpleTextBKDWriter |
Forked from BKDWriter and simplified/specialized for SimpleText's usage
|
SimpleTextCodec |
plain text index format.
|
SimpleTextCompoundFormat |
plain text compound format.
|
SimpleTextDocValuesFormat |
plain text doc values format.
|
SimpleTextDocValuesReader |
|
SimpleTextDocValuesReader.DocValuesIterator |
|
SimpleTextDocValuesReader.OneField |
|
SimpleTextDocValuesWriter |
|
SimpleTextFieldInfosFormat |
plaintext field infos format
|
SimpleTextFieldsReader |
|
SimpleTextFieldsWriter |
|
SimpleTextLiveDocsFormat |
reads/writes plaintext live docs
|
SimpleTextLiveDocsFormat.SimpleTextBits |
|
SimpleTextNormsFormat |
plain-text norms format.
|
SimpleTextNormsFormat.SimpleTextNormsConsumer |
Writes plain-text norms.
|
SimpleTextNormsFormat.SimpleTextNormsProducer |
Reads plain-text norms.
|
SimpleTextPointsFormat |
For debugging, curiosity, transparency only!! Do not
use this codec in production.
|
SimpleTextPointsReader |
|
SimpleTextPointsWriter |
|
SimpleTextPostingsFormat |
For debugging, curiosity, transparency only!! Do not
use this codec in production.
|
SimpleTextSegmentInfoFormat |
plain text segments file format.
|
SimpleTextSegmentInfoFormat.BytesRefOutput |
|
SimpleTextStoredFieldsFormat |
plain text stored fields format.
|
SimpleTextStoredFieldsReader |
reads plaintext stored fields
|
SimpleTextStoredFieldsWriter |
Writes plain-text stored fields.
|
SimpleTextTermVectorsFormat |
plain text term vectors format.
|
SimpleTextTermVectorsReader |
Reads plain-text term vectors.
|
SimpleTextTermVectorsReader.SimpleTVDocsEnum |
|
SimpleTextTermVectorsReader.SimpleTVFields |
|
SimpleTextTermVectorsReader.SimpleTVPostings |
|
SimpleTextTermVectorsReader.SimpleTVPostingsEnum |
|
SimpleTextTermVectorsReader.SimpleTVTerms |
|
SimpleTextTermVectorsReader.SimpleTVTermsEnum |
|
SimpleTextTermVectorsWriter |
Writes plain-text term vectors.
|
SimpleTextUtil |
|
SimpleWKTShapeParser |
Parses shape geometry represented in WKT format
complies with OGC® document: 12-063r5 and ISO/IEC 13249-3:2016 standard
located at http://docs.opengeospatial.org/is/12-063r5/12-063r5.html
|
SimpleWKTShapeParser.ShapeType |
Enumerated type for Shapes
|
SingleDocsEnum |
|
SingleFragListBuilder |
|
SingleFunction |
A function with a single argument
|
SingleInstanceLockFactory |
Implements LockFactory for a single in-process instance,
meaning all locking will take place through this one instance.
|
SinglePostingsEnum |
|
SingleTermsEnum |
Subclass of FilteredTermsEnum for enumerating a single term.
|
SingletonSortedNumericDocValues |
Exposes multi-valued view over a single-valued instance.
|
SingletonSortedSetDocValues |
Exposes multi-valued iterator view over a single-valued iterator.
|
SleepingLockWrapper |
Directory that wraps another, and that sleeps and retries
if obtaining the lock fails.
|
SloppyMath |
Math functions that trade off accuracy for speed.
|
SloppyPhraseMatcher |
Find all slop-valid position-combinations (matches)
encountered while traversing/hopping the PhrasePositions.
|
SlopQueryNode |
|
SlopQueryNodeBuilder |
|
SlowCodecReaderWrapper |
Wraps arbitrary readers for merging.
|
SlowImpactsEnum |
ImpactsEnum that doesn't index impacts but implements the API in a
legal way.
|
SlowLog |
Reports on slow queries in a given match run
|
SlowLog.Entry |
An individual entry in the slow log
|
SmallFloat |
Floating point numbers smaller than 32 bits.
|
SmartChineseAnalyzer |
SmartChineseAnalyzer is an analyzer for Chinese or mixed Chinese-English text.
|
SmartChineseAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
SnapshotDeletionPolicy |
|
SnowballFilter |
A filter that stems words using a Snowball-generated stemmer.
|
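A minimal sketch (assuming an upstream TokenStream named input) that stems English tokens:

    TokenStream stemmed = new SnowballFilter(input, "English"); // stemmer selected by language name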
SnowballPorterFilterFactory |
|
SnowballProgram |
This is the rev 502 of the Snowball SVN trunk,
now located at GitHub,
but modified:
made abstract and introduced abstract method stem to avoid expensive reflection in filter class.
|
SoftDeletesDirectoryReaderWrapper |
This reader filters out documents that have a doc values value in the given field and treats these
documents as soft deleted.
|
SoftDeletesDirectoryReaderWrapper.DelegatingCacheHelper |
|
SoftDeletesDirectoryReaderWrapper.SoftDeletesFilterCodecReader |
|
SoftDeletesDirectoryReaderWrapper.SoftDeletesFilterLeafReader |
|
SoftDeletesDirectoryReaderWrapper.SoftDeletesSubReaderWrapper |
|
SoftDeletesRetentionMergePolicy |
This MergePolicy allows soft deleted documents to be carried over across merges.
|
SolrSynonymParser |
Parser for the Solr synonyms format.
|
SoraniAnalyzer |
|
SoraniAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
SoraniNormalizationFilter |
|
SoraniNormalizationFilterFactory |
|
SoraniNormalizer |
Normalizes the Unicode representation of Sorani text.
|
SoraniStemFilter |
|
SoraniStemFilterFactory |
|
SoraniStemmer |
Light stemmer for Sorani
|
Sort |
Encapsulates sort criteria for returned hits.
|
SortableBytesRefArray |
|
SortedDocValues |
A per-document byte[] with presorted values.
|
SortedDocValuesField |
Field that stores
a per-document BytesRef value, indexed for
sorting.
|
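A minimal sketch (assuming writer, searcher and query exist) that stores a sortable per-document value and sorts search results on it:

    Document doc = new Document();
    doc.add(new SortedDocValuesField("category", new BytesRef("books")));
    writer.addDocument(doc);

    Sort sort = new Sort(new SortField("category", SortField.Type.STRING));
    TopDocs sorted = searcher.search(query, 10, sort);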
SortedDocValuesTermsEnum |
|
SortedDocValuesWriter |
Buffers up pending byte[] per doc, deref and sorting via
int ord, then flushes when segment flushes.
|
SortedDocValuesWriter.BufferedSortedDocValues |
|
SortedInputIterator |
This wrapper buffers incoming elements and makes sure they are sorted based on given comparator.
|
SortedIntSet |
|
SortedIntSet.FrozenIntSet |
|
SortedNumericDocValues |
A list of per-document numeric values, sorted
according to Long.compare(long, long) .
|
SortedNumericDocValuesField |
Field that stores a per-document long values for scoring,
sorting or value retrieval.
|
SortedNumericDocValuesRangeQuery |
|
SortedNumericDocValuesWriter |
Buffers up pending long[] per doc, sorts, then flushes when segment flushes.
|
SortedNumericDocValuesWriter.BufferedSortedNumericDocValues |
|
SortedNumericSelector |
Selects a value from the document's list to use as the representative value
|
SortedNumericSelector.MaxValue |
Wraps a SortedNumericDocValues and returns the last value (max)
|
SortedNumericSelector.MinValue |
Wraps a SortedNumericDocValues and returns the first value (min)
|
SortedNumericSelector.Type |
Type of selection to perform.
|
SortedNumericSortField |
|
SortedNumericSortField.Provider |
A SortFieldProvider for this sort field
|
SortedSetDocValues |
|
SortedSetDocValuesField |
Field that stores
a set of per-document BytesRef values, indexed for
faceting, grouping, and joining.
|
SortedSetDocValuesRangeQuery |
|
SortedSetDocValuesTermsEnum |
|
SortedSetDocValuesWriter |
Buffers up pending byte[]s per doc, deref and sorting via
int ord, then flushes when segment flushes.
|
SortedSetDocValuesWriter.BufferedSortedSetDocValues |
|
SortedSetFieldSource |
Retrieves FunctionValues instances for multi-valued string based fields.
|
SortedSetSelector |
Selects a value from the document's set to use as the representative value
|
SortedSetSelector.MaxValue |
Wraps a SortedSetDocValues and returns the last ordinal (max)
|
SortedSetSelector.MiddleMaxValue |
Wraps a SortedSetDocValues and returns the middle ordinal (or max of the two)
|
SortedSetSelector.MiddleMinValue |
Wraps a SortedSetDocValues and returns the middle ordinal (or min of the two)
|
SortedSetSelector.MinValue |
Wraps a SortedSetDocValues and returns the first ordinal (min)
|
SortedSetSelector.Type |
Type of selection to perform.
|
SortedSetSortField |
|
SortedSetSortField.Provider |
A SortFieldProvider for this sort
|
Sorter |
Sorts documents of a given index by returning a permutation on the document
IDs.
|
Sorter |
Base class for sorting algorithms implementations.
|
Sorter.DocMap |
A permutation of doc IDs.
|
Sorter.DocValueSorter |
|
SortField |
Stores information about how to sort documents by terms in an individual
field.
|
SortField.Provider |
A SortFieldProvider for field sorts
|
SortField.Type |
Specifies the type of the terms to be sorted, or special types such as CUSTOM
|
SortFieldProvider |
Reads/Writes a named SortField from a segment info file, used to record index sorts
|
SortFieldProvider.Holder |
|
SortingLeafReader |
|
SortingLeafReader.CachedBinaryDVs |
|
SortingLeafReader.CachedNumericDVs |
|
SortingLeafReader.SortingBinaryDocValues |
|
SortingLeafReader.SortingBits |
|
SortingLeafReader.SortingDocsEnum |
|
SortingLeafReader.SortingDocsEnum.DocFreqSorter |
|
SortingLeafReader.SortingFields |
|
SortingLeafReader.SortingNumericDocValues |
|
SortingLeafReader.SortingPointValues |
|
SortingLeafReader.SortingPostingsEnum |
|
SortingLeafReader.SortingPostingsEnum.DocOffsetSorter |
A TimSorter which sorts two parallel arrays of doc IDs and
offsets in one go.
|
SortingLeafReader.SortingSortedDocValues |
|
SortingLeafReader.SortingSortedNumericDocValues |
|
SortingLeafReader.SortingSortedSetDocValues |
|
SortingLeafReader.SortingTerms |
|
SortingLeafReader.SortingTermsEnum |
|
SortingStoredFieldsConsumer |
|
SortingStoredFieldsConsumer.CopyVisitor |
|
SortingTermVectorsConsumer |
|
SortRescorer |
A Rescorer that re-sorts according to a provided
Sort.
|
SpanBoostQuery |
|
SpanBuilderBase |
|
SpanCollector |
An interface defining the collection of postings information from the leaves
of a Spans
|
SpanContainingQuery |
Keep matches that contain another SpanScorer.
|
SpanContainQuery |
|
SpanFirstBuilder |
|
SpanFirstQuery |
Matches spans near the beginning of a field.
|
SpanGradientFormatter |
Formats text with different color intensity depending on the score of the
term using the span tag.
|
SpanishAnalyzer |
|
SpanishAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
SpanishLightStemFilter |
|
SpanishLightStemFilterFactory |
|
SpanishLightStemmer |
Light Stemmer for Spanish
|
SpanishMinimalStemFilter |
|
SpanishMinimalStemFilterFactory |
|
SpanishMinimalStemmer |
Minimal plural stemmer for Spanish.
|
SpanishStemmer |
This class was automatically generated by a Snowball to Java compiler
It implements the stemming algorithm defined by a snowball script.
|
SpanMultiTermQueryWrapper<Q extends MultiTermQuery> |
|
SpanMultiTermQueryWrapper.SpanRewriteMethod |
Abstract class that defines how the query is rewritten.
|
SpanMultiTermQueryWrapper.TopTermsSpanBooleanQueryRewrite |
A rewrite method that first translates each term into a SpanTermQuery in a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
SpanNearBuilder |
|
SpanNearClauseFactory |
|
SpanNearQuery |
Matches spans which are near one another.
|
SpanNearQuery.Builder |
A builder for SpanNearQueries
|
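A minimal sketch using the builder to require "quick" followed by "fox" within two positions, in order:

    SpanNearQuery near = SpanNearQuery.newOrderedNearQuery("body")
        .addClause(new SpanTermQuery(new Term("body", "quick")))
        .addClause(new SpanTermQuery(new Term("body", "fox")))
        .setSlop(2)
        .build();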
SpanNearQuery.GapSpans |
|
SpanNearQuery.SpanGapQuery |
|
SpanNotBuilder |
|
SpanNotQuery |
Removes matches which overlap with another SpanQuery or which are
within x tokens before or y tokens after another SpanQuery.
|
SpanOrBuilder |
|
SpanOrQuery |
Matches the union of its clauses.
|
SpanOrTermsBuilder |
|
SpanPayloadCheckQuery |
Only return those matches that have a specific payload at the given position.
|
SpanPositionCheckQuery |
Base class for filtering a SpanQuery based on the position of a match.
|
SpanPositionQueue |
|
SpanPositionRangeBuilder |
|
SpanPositionRangeQuery |
|
SpanQuery |
Base class for span-based queries.
|
SpanQueryBuilder |
|
SpanQueryBuilderFactory |
|
Spans |
Iterates through combinations of start/end positions per-doc.
|
SpanScorer |
|
SpanTermBuilder |
|
SpanTermQuery |
Matches spans containing a term.
|
SpanWeight |
Expert-only.
|
SpanWeight.Postings |
Enumeration defining what postings information should be retrieved from the
index for a given Spans
|
SpanWeight.TermMatch |
|
SpanWithinQuery |
Keep matches that are contained within another Spans.
|
SparseFixedBitSet |
A bit set that only stores longs that have at least one bit which is set.
|
SpellChecker |
Spell Checker class (Main class).
(initially inspired by the David Spencer code).
|
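A minimal sketch (assuming a Directory named spellDir, an IndexReader named reader, and an Analyzer named analyzer) that builds a spelling index from the "body" field and asks for suggestions; LuceneDictionary is used here as the dictionary source:

    SpellChecker spell = new SpellChecker(spellDir);
    spell.indexDictionary(new LuceneDictionary(reader, "body"),
                          new IndexWriterConfig(analyzer), true);
    String[] suggestions = spell.suggestSimilar("lucine", 5); // up to 5 candidates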
SPIClassIterator<S> |
Helper class for loading SPI classes from classpath (META-INF files).
|
SplittingBreakIterator |
Virtually slices the text on both sides of every occurrence of the specified character.
|
SrndBooleanQuery |
|
SrndPrefixQuery |
Query that matches String prefixes
|
SrndQuery |
Lowest level base class for surround queries
|
SrndTermQuery |
Simple single-term clause
|
SrndTruncQuery |
Query that matches wildcards
|
StandardAnalyzer |
|
StandardDirectoryReader |
|
StandardDirectoryReader.ReaderCommit |
|
StandardQueryBuilder |
This interface should be implemented by every class that wants to build
Query objects from QueryNode objects.
|
StandardQueryConfigHandler |
|
StandardQueryConfigHandler.ConfigurationKeys |
Class holding keys for StandardQueryNodeProcessorPipeline options.
|
StandardQueryConfigHandler.Operator |
Boolean Operator: AND or OR
|
StandardQueryNodeProcessorPipeline |
This pipeline has all the processors needed to process a query node tree,
generated by StandardSyntaxParser , already assembled.
|
StandardQueryParser |
This class is a helper that enables users to easily use the Lucene query
parser.
|
StandardQueryTreeBuilder |
This query tree builder only defines the necessary map to build a
Query tree object.
|
StandardSyntaxParser |
Parser for the standard Lucene syntax
|
StandardSyntaxParser.JJCalls |
|
StandardSyntaxParser.LookaheadSuccess |
|
StandardSyntaxParserConstants |
Token literal values and constants.
|
StandardSyntaxParserTokenManager |
Token Manager.
|
StandardTokenizer |
A grammar-based tokenizer constructed with JFlex.
|
StandardTokenizerFactory |
|
StandardTokenizerImpl |
|
StatePair |
Pair of states.
|
Stats |
|
STBlockLine |
|
STBlockLine.Serializer |
Reads block lines encoded incrementally, with all fields corresponding
to the term of the line.
|
STBlockReader |
Reads terms blocks with the Shared Terms format.
|
STBlockWriter |
Writes terms blocks with the Shared Terms format.
|
Stemmer |
Stemmer uses the affix rules declared in the Dictionary to generate one or more stems for a word.
|
StemmerOverrideFilter |
Provides the ability to override any KeywordAttribute aware stemmer
with custom dictionary-based stemming.
|
StemmerOverrideFilter.Builder |
|
StemmerOverrideFilter.StemmerOverrideMap |
A read-only 4-byte FST backed map that allows fast case-insensitive key
value lookups for StemmerOverrideFilter
|
StemmerOverrideFilterFactory |
|
StemmerUtil |
Some commonly-used stemming functions
|
StempelFilter |
Transforms the token stream as per the stemming algorithm.
|
StempelPolishStemFilterFactory |
|
StempelStemmer |
Stemmer class is a convenient facade for other stemmer-related classes.
|
STIntersectBlockReader |
|
STMergingBlockReader |
TermsEnum used when merging segments,
to enumerate the terms of an input segment and get all the fields TermState s
of each term.
|
STMergingTermsEnum |
Combines PostingsEnum for the same term for a given field from
multiple segments.
|
StopAnalyzer |
|
StopFilter |
Removes stop words from a token stream.
|
StopFilter |
Removes stop words from a token stream.
|
StopFilterFactory |
|
StopwordAnalyzerBase |
Base class for Analyzers that need to make use of stopword sets.
|
StoredField |
|
StoredFieldsConsumer |
|
StoredFieldsFormat |
Controls the format of stored fields
|
StoredFieldsReader |
Codec API for reading stored fields.
|
StoredFieldsWriter |
|
StoredFieldsWriter.StoredFieldsMergeSub |
|
StoredFieldVisitor |
Expert: provides a low-level means of accessing the stored field
values in an index.
|
StoredFieldVisitor.Status |
|
StrDocValues |
Abstract FunctionValues implementation which supports retrieving String values.
|
StrictStringTokenizer |
Used for parsing Version strings so we don't have to
use overkill String.split or StringTokenizer (which silently
skips empty tokens).
|
StringDistance |
Interface for string distances.
|
StringField |
A field that is indexed but not tokenized: the entire
String value is indexed as a single token.
|
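A minimal sketch (assuming an IndexWriter named writer) contrasting StringField with the analyzed TextField listed further below:

    Document doc = new Document();
    doc.add(new StringField("id", "doc-42", Field.Store.YES)); // indexed as one exact token
    doc.add(new TextField("body", "full text that will be analyzed", Field.Store.NO));
    writer.addDocument(doc);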
StringHelper |
Methods for manipulating strings.
|
StringMSBRadixSorter |
|
StringUtils |
String manipulation routines
|
STUniformSplitPostingsFormat |
PostingsFormat based on the Uniform Split technique and supporting
Shared Terms.
|
STUniformSplitTerms |
Extends UniformSplitTerms for a shared-terms dictionary, with
all the fields of a term in the same block line.
|
STUniformSplitTermsReader |
A block-based terms index and dictionary based on the Uniform Split technique,
and sharing all the fields' terms in the same dictionary, with all the fields
of a term in the same block line.
|
STUniformSplitTermsWriter |
Extends UniformSplitTermsWriter by sharing all the fields' terms
in the same dictionary and by writing all the fields of a term in the same
block line.
|
STUniformSplitTermsWriter.FieldsIterator |
|
STUniformSplitTermsWriter.SharedTermsWriter |
|
SuffixingNGramTokenFilter |
|
SuggestField |
Field that indexes a string value and a weight as a weighted completion
against a named suggester.
|
SuggestIndexSearcher |
Adds document suggest capabilities to IndexSearcher.
|
SuggestMode |
Set of strategies for suggesting related terms
|
SuggestScoreDocPriorityQueue |
|
SuggestStopFilter |
Like StopFilter except it will not remove the
last token if that token was not followed by some token
separator.
|
SuggestStopFilterFactory |
|
SuggestWord |
SuggestWord, used in suggestSimilar method in SpellChecker class.
|
SuggestWordFrequencyComparator |
Frequency first, then score.
|
SuggestWordQueue |
Sorts SuggestWord instances
|
SuggestWordScoreComparator |
Score first, then frequency
|
SumFloatFunction |
SumFloatFunction returns the sum of its components.
|
SumPayloadFunction |
Calculate the final score as the sum of scores of all payloads seen.
|
SumTotalTermFreqValueSource |
SumTotalTermFreqValueSource returns the number of tokens.
|
SuppressForbidden |
Annotation to suppress forbidden-apis errors inside a whole class, a method, or a field.
|
SwedishAnalyzer |
|
SwedishAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
SwedishLightStemFilter |
|
SwedishLightStemFilterFactory |
|
SwedishLightStemmer |
Light Stemmer for Swedish.
|
SwedishStemmer |
This class was automatically generated by a Snowball to Java compiler
It implements the stemming algorithm defined by a snowball script.
|
SweetSpotSimilarity |
A similarity with a lengthNorm that provides for a "plateau" of
equally good lengths, and tf helper functions.
|
SynonymFilter |
Deprecated.
|
SynonymFilter.PendingInput |
|
SynonymFilter.PendingOutputs |
|
SynonymFilterFactory |
Deprecated.
|
SynonymGraphFilter |
Applies single- or multi-token synonyms from a SynonymMap
to an incoming TokenStream , producing a fully correct graph
output.
|
SynonymGraphFilter.BufferedInputToken |
|
SynonymGraphFilter.BufferedOutputToken |
|
SynonymGraphFilterFactory |
|
SynonymMap |
A map of synonyms; keys and values are phrases.
|
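A minimal sketch (assuming a Tokenizer named tokenizer) that builds a one-entry SynonymMap and applies it with SynonymGraphFilter:

    SynonymMap.Builder builder = new SynonymMap.Builder(true); // true = dedup entries
    builder.add(new CharsRef("fast"), new CharsRef("quick"), true); // true = keep the original token
    SynonymMap map = builder.build();
    TokenStream stream = new SynonymGraphFilter(tokenizer, map, true); // true = ignore case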
SynonymMap.Builder |
Builds an FSTSynonymMap.
|
SynonymMap.Builder.MapEntry |
|
SynonymMap.Parser |
Abstraction for parsing synonym files.
|
SynonymQuery |
A query that treats multiple terms as synonyms.
|
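A minimal sketch that scores "tv" and "television" in the "body" field as if they were a single term:

    SynonymQuery synonyms = new SynonymQuery.Builder("body")
        .addTerm(new Term("body", "tv"))
        .addTerm(new Term("body", "television"))
        .build();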
SynonymQuery.Builder |
|
SynonymQuery.DisiWrapperFreq |
|
SynonymQuery.FreqBoostTermScorer |
|
SynonymQuery.SynonymScorer |
|
SynonymQuery.TermAndBoost |
|
SynonymQueryNode |
QueryNode for clauses that are synonym of each other.
|
SynonymQueryNodeBuilder |
|
SyntaxParser |
|
TeeSinkTokenFilter |
This TokenFilter provides the ability to set aside attribute states that have already been analyzed.
|
TeeSinkTokenFilter.SinkTokenStream |
TokenStream output from a tee.
|
TeeSinkTokenFilter.States |
A convenience wrapper for storing the cached states as well the final state of the stream.
|
Term |
A Term represents a word from text.
|
TermAutomatonQuery |
A proximity query that lets you express an automaton, whose
transitions are terms, to match documents.
|
TermAutomatonQuery.EnumAndScorer |
|
TermAutomatonScorer |
|
TermAutomatonScorer.DocIDQueue |
Sorts by docID so we can quickly pull out all scorers that are on
the same (lowest) docID.
|
TermAutomatonScorer.PositionQueue |
Sorts by position so we can visit all scorers on one doc, by
position.
|
TermAutomatonScorer.PosState |
|
TermAutomatonScorer.TermRunAutomaton |
|
TermBytes |
Term of a block line.
|
TermCollectingRewrite<B> |
|
TermCollectingRewrite.TermCollector |
|
TermFilteredPresearcher |
Presearcher implementation that uses terms extracted from queries to index
them in the Monitor, and builds a disjunction from terms in a document to match
them.
|
TermFilteredPresearcher.DocumentQueryBuilder |
Constructs a document disjunction from a set of terms
|
TermFrequencyAttribute |
Sets the custom term frequency of a term within one document.
|
TermFrequencyAttributeImpl |
|
TermFreqValueSource |
|
TermGroupFacetCollector |
An implementation of GroupFacetCollector that computes grouped facets based on the indexed terms
from DocValues.
|
TermGroupFacetCollector.GroupedFacetHit |
|
TermGroupFacetCollector.MV |
|
TermGroupFacetCollector.MV.SegmentResult |
|
TermGroupFacetCollector.SV |
|
TermGroupFacetCollector.SV.SegmentResult |
|
TermGroupSelector |
A GroupSelector implementation that groups via SortedDocValues
|
TermInSetQuery |
|
TermInSetQuery.TermAndState |
|
TermInSetQuery.WeightOrDocIdSet |
|
TermIntervalsSource |
|
TermMatchesIterator |
|
TermQuery |
A Query that matches documents containing a term.
|
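A minimal sketch (assuming an IndexSearcher named searcher) that finds documents containing the term "lucene" in the "body" field:

    Query query = new TermQuery(new Term("body", "lucene"));
    TopDocs hits = searcher.search(query, 10); // top 10 matches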
TermQueryBuilder |
|
TermRangeQuery |
A Query that matches documents within a range of terms.
|
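A minimal sketch selecting titles from "a" (inclusive) up to "m" (exclusive):

    Query range = TermRangeQuery.newStringRange("title", "a", "m", true, false);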
TermRangeQueryNode |
This query node represents a range query composed by FieldQueryNode
bounds, which means the bound values are strings.
|
TermRangeQueryNodeBuilder |
|
TermRangeQueryNodeProcessor |
|
Terms |
Access to the terms in a specific field.
|
TermsCollector<DV> |
A collector that collects all terms from a specified field matching the query.
|
TermsCollector.MV |
|
TermsCollector.SV |
|
TermScorer |
Expert: A Scorer for documents matching a Term .
|
TermsEnum |
|
TermsEnum.SeekStatus |
|
TermsEnumTokenStream |
|
TermsHash |
This class is passed each token produced by the analyzer
on each field during indexing, and it stores these
tokens in a hash table, and allocates separate byte
streams per token.
|
TermsHashPerField |
This class stores streams of information per term without knowing
the size of the stream ahead of time.
|
TermsHashPerField.PostingsBytesStartArray |
|
TermsIncludingScoreQuery |
|
TermsIndexReaderBase |
BlockTermsReader interacts with an instance of this class
to manage its terms index.
|
TermsIndexReaderBase.FieldIndexEnum |
Similar to TermsEnum, except, the only "metadata" it
reports for a given indexed term is the long fileOffset
into the main terms dictionary file.
|
TermsIndexWriterBase |
|
TermSpans |
Expert:
Public for extension only.
|
TermsQuery |
A query that has an array of terms from a specific field.
|
TermsQueryBuilder |
Builds a BooleanQuery from all of the terms found in the XML element using the choice of analyzer
|
TermState |
Encapsulates all required internal state to position the associated
TermsEnum without re-seeking.
|
TermStates |
|
TermStatistics |
Contains statistics for a specific term
|
TermStats |
Holder for per-term statistics.
|
TermStats |
|
TermsWithScoreCollector<DV> |
|
TermsWithScoreCollector.MV |
|
TermsWithScoreCollector.MV.Avg |
|
TermsWithScoreCollector.SV |
|
TermsWithScoreCollector.SV.Avg |
|
TermToBytesRefAttribute |
This attribute is requested by TermsHashPerField to index the contents.
|
TermVectorFilteredLeafReader |
A filtered LeafReader that only includes the terms that are also in a provided set of terms.
|
TermVectorFilteredLeafReader.TermsFilteredTerms |
|
TermVectorFilteredLeafReader.TermVectorFilteredTermsEnum |
|
TermVectorLeafReader |
Wraps a Terms with a LeafReader , typically from term vectors.
|
TermVectorOffsetStrategy |
Uses term vectors that contain offsets.
|
TermVectorsConsumer |
|
TermVectorsConsumerPerField |
|
TermVectorsConsumerPerField.TermVectorsPostingsArray |
|
TermVectorsFormat |
Controls the format of term vectors
|
TermVectorsReader |
Codec API for reading term vectors:
|
TermVectorsWriter |
|
TermVectorsWriter.TermVectorsMergeSub |
|
TermWeightor |
Calculates the weight of a Term
|
TernaryTree |
Ternary Search Tree.
|
TernaryTreeNode |
The class creates a TST node.
|
Tessellator |
Computes a triangular mesh tessellation for a given polygon.
|
Tessellator.Node |
Circular Doubly-linked list used for polygon coordinates
|
Tessellator.State |
state of the tessellated split - avoids recursion
|
Tessellator.Triangle |
Triangle in the tessellated mesh
|
TextableQueryNode |
Interface for a node that has text as a CharSequence
|
TextField |
A field that is indexed and tokenized, without term
vectors.
|
TextFragment |
Low-level class used to record information about a section of a document
with a score.
|
TFIDFSimilarity |
Implementation of Similarity with the Vector Space Model.
|
TFValueSource |
|
ThaiAnalyzer |
|
ThaiAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
ThaiTokenizer |
Tokenizer that use BreakIterator to tokenize Thai text.
|
ThaiTokenizerFactory |
|
ThreadInterruptedException |
Thrown by Lucene on detecting that Thread.interrupt() had
been called.
|
TieredMergePolicy |
Merges segments of approximately equal size, subject to
an allowed number of segments per tier.
|
TieredMergePolicy.MERGE_TYPE |
|
TieredMergePolicy.MergeScore |
Holds score and explanation for a single candidate
merge.
|
TieredMergePolicy.SegmentSizeAndDocs |
|
TimeLimitingCollector |
The TimeLimitingCollector is used to time out search requests that
take longer than the maximum allowed search time limit.
|
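A minimal sketch (assuming searcher and query exist) that aborts collection after roughly one second while keeping whatever was collected so far:

    TopScoreDocCollector topDocs = TopScoreDocCollector.create(10, Integer.MAX_VALUE);
    Collector limited = new TimeLimitingCollector(
        topDocs, TimeLimitingCollector.getGlobalCounter(), 1000); // roughly a 1000 ms budget
    try {
      searcher.search(query, limited);
    } catch (TimeLimitingCollector.TimeExceededException e) {
      // partial results remain available via topDocs.topDocs()
    }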
TimeLimitingCollector.TimeExceededException |
Thrown when elapsed search time exceeds allowed search time.
|
TimeLimitingCollector.TimerThread |
Thread used to timeout search requests.
|
TimeLimitingCollector.TimerThreadHolder |
|
TimSorter |
|
ToChildBlockJoinQuery |
Just like ToParentBlockJoinQuery , except this
query joins in reverse: you provide a Query matching
parent documents and it joins down to child
documents.
|
ToChildBlockJoinQuery.ToChildBlockJoinScorer |
|
ToChildBlockJoinQuery.ToChildBlockJoinWeight |
|
Token |
Analyzed token with morphological data from its dictionary.
|
Token |
Analyzed token with morphological data.
|
Token |
Describes the input token stream.
|
Token |
Describes the input token stream.
|
Token |
Describes the input token stream.
|
TokenFilter |
A TokenFilter is a TokenStream whose input is another TokenStream.
|
TokenFilterFactory |
Abstract parent class for analysis factories that create TokenFilter
instances.
|
TokenGroup |
One, or several overlapping tokens, along with the score(s) and the scope of
the original text.
|
TokenInfoDictionary |
Binary dictionary implementation for a known-word dictionary model:
Words are encoded into an FST mapping to a list of wordIDs.
|
TokenInfoDictionary |
Binary dictionary implementation for a known-word dictionary model:
Words are encoded into an FST mapping to a list of wordIDs.
|
TokenInfoDictionary.SingletonHolder |
|
TokenInfoDictionary.SingletonHolder |
|
TokenInfoDictionaryBuilder |
|
TokenInfoDictionaryBuilder |
|
TokenInfoDictionaryWriter |
|
TokenInfoDictionaryWriter |
|
TokenInfoFST |
Thin wrapper around an FST with root-arc caching for Japanese.
|
TokenInfoFST |
Thin wrapper around an FST with root-arc caching for Hangul syllables (11,172 arcs).
|
TokenizedPhraseQueryNode |
|
Tokenizer |
A Tokenizer is a TokenStream whose input is a Reader.
|
TokenizerFactory |
Abstract parent class for analysis factories that create Tokenizer
instances.
|
TokenMgrError |
Token Manager Error.
|
TokenMgrError |
Token Manager Error.
|
TokenMgrError |
Token Manager Error.
|
TokenOffsetPayloadTokenFilter |
|
TokenOffsetPayloadTokenFilterFactory |
|
TokenSources |
Convenience methods for obtaining a TokenStream for use with the Highlighter - can obtain from
term vectors with offsets and positions or from an Analyzer re-parsing the stored content.
|
TokenStream |
A TokenStream enumerates the sequence of tokens, either from
Field s of a Document or from query text.
|
TokenStreamFromTermVector |
TokenStream created from a term vector field.
|
TokenStreamFromTermVector.TokenLL |
|
TokenStreamOffsetStrategy |
Analyzes the text, producing a single OffsetsEnum wrapping the TokenStream filtered to terms
in the query, including wildcards.
|
TokenStreamOffsetStrategy.TokenStreamOffsetsEnum |
|
TokenStreamToAutomaton |
Consumes a TokenStream and creates an Automaton
where the transition labels are UTF8 bytes (or Unicode
code points if unicodeArcs is true) from the TermToBytesRefAttribute .
|
TokenStreamToAutomaton.Position |
|
TokenStreamToAutomaton.Positions |
|
TokenStreamToTermAutomatonQuery |
|
TooComplexToDeterminizeException |
This exception is thrown when determinizing an automaton would result in one
which has too many states.
|
TooManyBasicQueries |
|
ToParentBlockJoinQuery |
|
ToParentBlockJoinQuery.BlockJoinScorer |
|
ToParentBlockJoinQuery.BlockJoinWeight |
|
ToParentBlockJoinQuery.ParentApproximation |
|
ToParentBlockJoinQuery.ParentTwoPhase |
|
ToParentBlockJoinSortField |
A special sort field that allows sorting parent docs based on nested / child level fields.
|
ToParentDocValues |
|
ToParentDocValues.Accumulator |
|
ToParentDocValues.NumDV |
|
ToParentDocValues.SortedDVs |
|
TopDocs |
|
TopDocs.MergeSortQueue |
|
TopDocs.ScoreMergeSortQueue |
|
TopDocs.ShardRef |
|
TopDocsCollector<T extends ScoreDoc> |
A base class for all collectors that return a TopDocs output.
|
TopFieldCollector |
|
TopFieldCollector.MultiComparatorLeafCollector |
|
TopFieldCollector.PagingFieldCollector |
|
TopFieldCollector.SimpleFieldCollector |
|
TopFieldDocs |
|
TopGroups<T> |
Represents result returned by a grouping search.
|
TopGroups.ScoreMergeMode |
How the GroupDocs score (if any) should be merged.
|
TopGroupsCollector<T> |
A second-pass collector that collects the TopDocs for each group, and
returns them as a TopGroups object
|
TopGroupsCollector.MaxScoreCollector |
|
TopGroupsCollector.TopDocsAndMaxScoreCollector |
|
TopGroupsCollector.TopDocsReducer<T> |
|
TopScoreDocCollector |
A Collector implementation that collects the top-scoring hits,
returning them as a TopDocs .
|
TopScoreDocCollector.PagingTopScoreDocCollector |
|
TopScoreDocCollector.ScorerLeafCollector |
|
TopScoreDocCollector.SimpleTopScoreDocCollector |
|
TopSuggestDocs |
|
TopSuggestDocs.SuggestScoreDoc |
ScoreDoc with an
additional CharSequence key
|
TopSuggestDocsCollector |
Collector that collects completion and
score, along with document id
|
TopTermsRewrite<B> |
Base rewrite method for collecting only the top terms
via a priority queue.
|
TopTermsRewrite.ScoreTerm |
|
ToStringUtil |
Utility class for english translations of morphological data,
used only for debugging.
|
ToStringUtils |
Helper methods to ease implementing Object.toString() .
|
TotalHitCountCollector |
Just counts the total number of hits.
|
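A minimal sketch (assuming searcher and query exist) that counts matches without keeping any hits:

    TotalHitCountCollector counter = new TotalHitCountCollector();
    searcher.search(query, counter);
    int totalHits = counter.getTotalHits();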
TotalHits |
Description of the total number of hits of a query.
|
TotalHits.Relation |
|
TotalTermFreqValueSource |
TotalTermFreqValueSource returns the total term freq
(sum of term freqs across all documents).
|
TrackingDirectoryWrapper |
A delegating Directory that records which files were
written to and deleted.
|
TrackingTmpOutputDirectoryWrapper |
|
Transition |
|
Trie |
A Trie is used to store a dictionary of words and their stems.
|
TrimFilter |
Trims leading and trailing whitespace from Tokens in the stream.
|
TrimFilterFactory |
|
TruncateTokenFilter |
A token filter for truncating the terms into a specific length.
|
TruncateTokenFilterFactory |
|
TSTAutocomplete |
Ternary Search Trie implementation.
|
TSTLookup |
|
TurkishAnalyzer |
|
TurkishAnalyzer.DefaultSetHolder |
Atomically loads the DEFAULT_STOP_SET in a lazy fashion once the outer class
accesses the static final set the first time.
|
TurkishLowerCaseFilter |
Normalizes Turkish token text to lower case.
|
TurkishLowerCaseFilterFactory |
|
TurkishStemmer |
This class was automatically generated by a Snowball to Java compiler
It implements the stemming algorithm defined by a snowball script.
|
TwoPhaseCommit |
An interface for implementations that support 2-phase commit.
|
TwoPhaseCommitTool |
A utility for executing 2-phase commit on several objects.
|
TwoPhaseCommitTool.CommitFailException |
|
TwoPhaseCommitTool.PrepareCommitFailException |
|
TwoPhaseIterator |
|
TwoPhaseIterator.TwoPhaseIteratorAsDocIdSetIterator |
|
TypeAsPayloadTokenFilter |
|
TypeAsPayloadTokenFilterFactory |
|
TypeAsSynonymFilter |
|
TypeAsSynonymFilterFactory |
|
TypeAttribute |
A Token's lexical type.
|
TypeAttributeImpl |
|
TypeTokenFilter |
Removes tokens whose types appear in a set of blocked types from a token stream.
|
TypeTokenFilterFactory |
|
UAX29URLEmailAnalyzer |
|
UAX29URLEmailTokenizer |
This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
Unicode Standard Annex #29
URLs and email addresses are also tokenized according to the relevant RFCs.
|
UAX29URLEmailTokenizerFactory |
|
UAX29URLEmailTokenizerImpl |
This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
Unicode Standard Annex #29
URLs and email addresses are also tokenized according to the relevant RFCs.
|
UHComponents |
|
UnescapedCharSequence |
CharsSequence with escaped chars information.
|
UnicodeProps |
This file contains unicode properties used by various CharTokenizer s.
|
UnicodeUtil |
Class to encode java's UTF16 char[] into UTF8 byte[]
without always allocating a new byte[] as
String.getBytes(StandardCharsets.UTF_8) does.
|
UnicodeWhitespaceAnalyzer |
|
UnicodeWhitespaceTokenizer |
A UnicodeWhitespaceTokenizer is a tokenizer that divides text at whitespace.
|
UnifiedHighlighter |
|
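A minimal sketch (assuming searcher, analyzer, query and topDocs exist) that highlights the "body" field of the top hits:

    UnifiedHighlighter highlighter = new UnifiedHighlighter(searcher, analyzer);
    String[] fragments = highlighter.highlight("body", query, topDocs);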
UnifiedHighlighter.HighlightFlag |
Flags for controlling highlighting behavior.
|
UnifiedHighlighter.LimitedStoredFieldVisitor |
Fetches stored fields for highlighting.
|
UnifiedHighlighter.OffsetSource |
Source of term offsets; essential for highlighting.
|
UnifiedHighlighter.TermVectorReusingLeafReader |
|
UniformSplitPostingsFormat |
|
UniformSplitTerms |
Terms based on the Uniform Split technique.
|
UniformSplitTermsReader |
A block-based terms index and dictionary based on the Uniform Split technique.
|
UniformSplitTermsWriter |
A block-based terms index and dictionary that assigns terms to nearly
uniform length blocks.
|
UnionFieldMetadataBuilder |
|
UnknownDictionary |
Dictionary for unknown-word handling.
|
UnknownDictionary |
Dictionary for unknown-word handling.
|
UnknownDictionary.SingletonHolder |
|
UnknownDictionary.SingletonHolder |
|
UnknownDictionaryBuilder |
|
UnknownDictionaryBuilder |
|
UnknownDictionaryWriter |
|
UnknownDictionaryWriter |
|
UnorderedIntervalsSource |
|
UnorderedIntervalsSource.UnorderedIntervalIterator |
|
UnsortedInputIterator |
This wrapper buffers the incoming elements and makes sure they are in
random order.
|
UpgradeIndexMergePolicy |
|
UpperCaseFilter |
Normalizes token text to UPPER CASE.
|
UpperCaseFilterFactory |
|
UpToTwoPositiveIntOutputs |
An FST Outputs implementation where each output
is one or two non-negative long values.
|
UpToTwoPositiveIntOutputs.TwoLongs |
Holds two long outputs.
|
UsageTrackingQueryCachingPolicy |
A QueryCachingPolicy that tracks usage statistics of recently-used
filters in order to decide on which filters are worth caching.
|
UserDictionary |
Class for building a User Dictionary.
|
UserDictionary |
Class for building a User Dictionary.
|
UserInputQueryBuilder |
UserInputQueryBuilder uses 1 of 2 strategies for thread-safe parsing:
1) Synchronizing access to "parse" calls on a previously supplied QueryParser
or..
|
UTF32ToUTF8 |
Converts UTF-32 automata to the equivalent UTF-8 representation.
|
UTF32ToUTF8.UTF8Byte |
|
UTF32ToUTF8.UTF8Sequence |
|
Util |
Static helper methods.
|
Util.FSTPath<T> |
Represents a path in TopNSearcher.
|
Util.Result<T> |
|
Util.TieBreakByInputComparator<T> |
Compares first by the provided comparator, and then
tie breaks by path.input.
|
Util.TopNSearcher<T> |
Utility class to find top N shortest paths from start
point(s).
|
Util.TopResults<T> |
|
Utility |
SmartChineseAnalyzer utility constants and methods
|
ValueQueryNode<T> |
This interface should be implemented by QueryNode that holds an
arbitrary value.
|
ValueSource |
|
ValueSource.FromDoubleValuesSource |
|
ValueSource.ScoreAndDoc |
|
ValueSource.WrappedDoubleValuesSource |
|
ValueSource.WrappedLongValuesSource |
|
ValueSourceGroupSelector |
A GroupSelector that groups via a ValueSource
|
ValueSourceScorer |
|
VariableContext |
A helper to parse the context of a variable name, which is the base variable, followed by the
sequence of array (integer or string indexed) and member accesses.
|
VariableContext.Type |
Represents what a piece of a variable does.
|
VariableGapTermsIndexReader |
|
VariableGapTermsIndexReader.IndexEnum |
|
VariableGapTermsIndexWriter |
|
VariableGapTermsIndexWriter.EveryNOrDocFreqTermSelector |
Sets an index term when docFreq >= docFreqThresh, or
every interval terms.
|
VariableGapTermsIndexWriter.EveryNTermSelector |
|
VariableGapTermsIndexWriter.IndexTermSelector |
Hook for selecting which terms should be placed in the terms index.
|
VectorValueSource |
Converts individual ValueSource instances to leverage the FunctionValues *Val functions that work with multiple values,
i.e.
|
VerifyingLockFactory |
A LockFactory that wraps another LockFactory and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time).
|
Version |
Used by certain classes to match version compatibility
across releases of Lucene.
|
VersionBlockTreeTermsReader |
|
VersionBlockTreeTermsWriter |
This is just like BlockTreeTermsWriter , except it also stores a version per term, and adds a method to its TermsEnum
implementation to seekExact only if the version is >= the specified version.
|
VersionBlockTreeTermsWriter.FieldMetaData |
|
VersionBlockTreeTermsWriter.PendingBlock |
|
VersionBlockTreeTermsWriter.PendingEntry |
|
VersionBlockTreeTermsWriter.PendingTerm |
|
VersionFieldReader |
BlockTree's implementation of Terms .
|
VirtualMethod<C> |
A utility for keeping backwards compatibility on previously abstract methods
(or similar replacements).
|
WANDScorer |
This implements the WAND (Weak AND) algorithm for dynamic pruning
described in "Efficient Query Evaluation using a Two-Level Retrieval
Process" by Broder, Carmel, Herscovici, Soffer and Zien.
|
WeakIdentityMap<K,V> |
Implements a combination of WeakHashMap and
IdentityHashMap .
|
WeakIdentityMap.IdentityWeakReference |
|
Weight |
Expert: Calculate query weights and build query scorers.
|
Weight.DefaultBulkScorer |
Just wraps a Scorer and performs top scoring using it.
|
WeightedFieldFragList |
|
WeightedFragListBuilder |
|
WeightedSpanTerm |
Lightweight class to hold term, weight, and positions used for scoring this
term.
|
WeightedSpanTermExtractor |
|
WeightedSpanTermExtractor.DelegatingLeafReader |
|
WeightedSpanTermExtractor.PositionCheckingMap<K> |
This class makes sure that if both position sensitive and insensitive
versions of the same term are added, the position insensitive one wins.
|
WeightedTerm |
Lightweight class to hold term and a weight value used for scoring this term
|
WFSTCompletionLookup |
Suggester based on a weighted FST: it first traverses the prefix,
then walks the n shortest paths to retrieve top-ranked
suggestions.
|
WFSTCompletionLookup.WFSTInputIterator |
|
WhitespaceAnalyzer |
|
WhitespaceTokenizer |
A tokenizer that divides text at whitespace characters as defined by
Character.isWhitespace(int) .
|
WhitespaceTokenizerFactory |
|
WholeBreakIterator |
Just produces one single fragment for the entire text
|
WikipediaTokenizer |
Extension of StandardTokenizer that is aware of Wikipedia syntax.
|
WikipediaTokenizerFactory |
|
WikipediaTokenizerImpl |
JFlex-generated tokenizer that is aware of Wikipedia syntax.
|
WildcardQuery |
Implements the wildcard search query.
|
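A minimal sketch; '*' matches any character sequence and '?' matches a single character:

    Query wildcard = new WildcardQuery(new Term("title", "lu*ne"));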
WildcardQueryNode |
|
WildcardQueryNodeBuilder |
|
WildcardQueryNodeProcessor |
|
WindowsDirectory |
Native Directory implementation for Microsoft Windows.
|
WindowsDirectory.WindowsIndexInput |
|
WordBreakSpellChecker |
A spell checker whose sole function is to offer suggestions by combining
multiple terms into one word and/or breaking terms into multiple words.
|
WordBreakSpellChecker.BreakSuggestionSortMethod |
Determines the order to list word break suggestions
|
WordBreakSpellChecker.CombinationsThenFreqComparator |
|
WordBreakSpellChecker.CombineSuggestionWrapper |
|
WordBreakSpellChecker.LengthThenMaxFreqComparator |
|
WordBreakSpellChecker.LengthThenSumFreqComparator |
|
WordBreakSpellChecker.SuggestWordArrayWrapper |
|
WordDelimiterFilter |
Deprecated.
|
WordDelimiterFilterFactory |
Deprecated.
|
WordDelimiterGraphFilter |
Splits words into subwords and performs optional transformations on subword
groups, producing a correct token graph so that e.g.
|
WordDelimiterGraphFilterFactory |
|
WordDelimiterIterator |
A BreakIterator-like API for iterating over subwords in text, according to WordDelimiterGraphFilter rules.
|
WordDictionary |
SmartChineseAnalyzer Word Dictionary
|
WordlistLoader |
Loader for text files that represent a list of stopwords.
|
WordnetSynonymParser |
Parser for wordnet prolog format
|
WordSegmenter |
Segment a sentence of Chinese text into words.
|
WordType |
Internal SmartChineseAnalyzer token type constants
|
XYCircle |
Represents a circle on the XY plane.
|
XYDocValuesField |
A per-document location field.
|
XYDocValuesPointInGeometryQuery |
|
XYEncodingUtils |
reusable cartesian geometry encoding methods
|
XYGeometry |
Cartesian Geometry object.
|
XYLine |
Represents a line in cartesian space.
|
XYPoint |
Represents a point in cartesian space.
|
XYPointDistanceComparator |
Compares documents by distance from an origin point
|
XYPointField |
An indexed XY position field.
|
XYPointInGeometryQuery |
Finds all previously indexed points that fall within the specified XY geometries.
|
XYPointSortField |
Sorts by distance from an origin location.
|
XYPolygon |
Represents a polygon in cartesian space.
|
XYRectangle |
Represents a x/y cartesian rectangle.
|
XYShape |
A cartesian shape utility class for indexing and searching geometries whose vertices are unitless x, y values.
|
XYShapeQuery |
|