All Classes and Interfaces

Class
Description
Create floating sub-contexts from a caller context and commit them when they reach their time/write quota.
A floating window (agile) context: creates sub-contexts and commits them as they reach their time/size quota.
A non-agile context that simply uses the caller's context and never commits.
A CJK Analyzer which applies a minimum and maximum token length to non-CJK tokens.
A Length Filter which ignores non-alphanumeric types.
A TokenFilterFactory that creates Alphanumeric Length filters.
Choose an Analyzer.
An analyzer that is used to analyze the auto_complete input.
This version assumes longs; switching based on Point type (either long or int) is left as future work.
A subclass of PointsConfig to allow the Parser the ability to translate boolean terms to binary ones.
An analyzer that forms unigrams of CJK terms.
A mixin interface for common parser functionality; it provides the ability to construct queries with typing information in hand via PointsConfig.
An analyzer that can handle emails, CJK, and synonyms.
Factory to build index and query Analyzer for EmailCjkSynonymAnalyzer.
An IndexInput to go with EmptyIndexOutput.
An output that is used to cause file references to exist, but doesn't actually allow writing.
Abstract class for synonym map config for English.
Authoritative only synonym map config for English.
Expanded synonym map config for English.
Effectively a no-op analyzer.
Constructs a new instance of ExactTokenAnalyzer.
Directory implementation backed by FDB which attempts to model a file system on top of FoundationDB.
Produce a lock over FDBDirectory.
An exception thrown when obtaining the lock fails.
A transaction-scoped manager of FDBDirectory objects.
A shared cache for a single FDBDirectory.
A cache for FDBDirectory blocks that can be shared between record contexts.
Builder for FDBDirectorySharedCacheManager.
Utilities for standardizing some interactions with FDBDirectory.
Wrapper containing an FDBDirectory and cached accessor objects (like IndexWriters).
Class that handles reading data cut into blocks (KeyValue) backed by an FDB keyspace.
Implementation of IndexOutput representing the writing of data in Lucene to a file.
A File Reference record laying out the id, size, and block size.
Class for encapsulating logic around storing FieldInfos, via LuceneOptimizedFieldInfosFormat in FDBDirectory, and caching the data for the life of the directory.
Representation of a single pass of a highlighter.
Similar to LazyOpener, but that is also Closeable.
Class to lazily "open" something that may throw an IOException when opening (an illustrative sketch of this pattern appears at the end of this list).
A function that returns an object, but may throw an IOException.
The "legacy" stored fields reader implementation - this one wraps around the Lucene default implementation and provides lazy initialization.
Provide a combination of analyzers for multiple fields of one Lucene index.
Each implementation of Analyzer should have its own implementation of this factory interface to provide instances of the analyzers for indexing and query to a LuceneAnalyzerRegistry.
Registry for AnalyzerChoosers.
Default implementation of the LuceneAnalyzerRegistry.
The type used to determine how the Analyzer built by LuceneAnalyzerFactory is used.
A wrapper for Analyzer and its unique identifier.
Factory to build index and query Analyzer for auto-complete suggestions.
This class provides some helpers for auto-complete functionality using Lucene's auto-complete suggestion lookup.
Helper class to capture token information synthesized from a search key.
Auto complete query clause from string using Lucene search syntax.
Binder for a conjunction of other clauses.
Wrapper of a Lucene Query that contains accessible field name, comparison type, and comparand.
Utility class for methods related to synchronizing Futures.
An exception that is thrown when the async to sync operation times out.
Helper class for converting FDBRecords to Lucene documents.
A RecordSource based on an FDBRecord.
StoreTimer events associated with Lucene operations.
Count events.
Detail events.
Main events.
Size Events.
Wait events.
Utility class for converting Lucene Exceptions to/from Record layer ones.
A Wrapper around the transaction-too-old exception that gets thrown through Lucene as an IOException.
Lucene function key expressions.
The key function for Lucene field configuration.
The lucene_field_name key function.
Key function representing one of the Lucene built-in sorting techniques.
The lucene_sorted key function.
The lucene_stored key function.
The lucene_text key function.
Implementation of Lucene index key functions.
Key function names for Lucene indexes.
Get metadata information about a given Lucene index.
Helper class for highlighting search matches.
The root expression of a LUCENE index specifies how select fields of a record are mapped to fields of a Lucene document.
An actual document / document meta-data.
Information about how a document field is derived from a record field.
Possible types for document fields.
An actual record / record meta-data.
A class that serializes the Index primary keys according to a format.
A utility class to build a partial record for an auto-complete suggestion value, with grouping keys if they exist.
The copier that populates the Lucene auto-complete suggestion as a value for the field it is indexed from.
Deserializer.
Index maintainer for Lucene Indexes backed by FDB.
Index Maintainer Factory for Lucene Indexes.
Options for use with Lucene indexes.
Lucene query plan for including search-related scan parameters.
Deserializer.
Index Scrubbing Toolbox for a Lucene index maintainer.
Provide a Lucene-specific reason for detecting a "missing" index entry.
Lucene query plan that allows making spell-check suggestions.
An index on the tokens in a text field.
Validator for Lucene indexes.
Record Layer's implementation of InfoStream that publishes messages as TRACE logs.
Lucene specific logging keys.
Metadata information about a Lucene index, in response to LuceneGetMetadataInfo.
Information about an individual Lucene directory.
Binder for a negation of clauses.
Codec with a few optimizations for speeding up compound files sitting on FoundationDB.
Wrapper for the Lucene50CompoundFormat to optimize compound files for sitting on FoundationDB.
Class for accessing a compound stream.
This class provides a lazy reader implementation to limit the amount of data that needs to be read from FDB.
FieldInfosFormat optimized for storage in the FDBDirectory.
This class optimizes the current IndexSearcher and attempts to perform operations in parallel in places where data access can occur.
Lazily reads the LiveDocsFormat to limit the number of bytes returned from FDB.
Optimized MultiFieldQueryParser that adds the slop for SpanNearQuery as well.
A QueryParser that changes the way by which stop words in the query are handled.
Lazily reads the PointsFormat to limit the number of bytes returned from FDB.
PostingsFormat optimized for FDB storage.
Concrete class that reads the docId list (and possibly frq, pos, offset, and payloads) with the postings format.
Optimized QueryParser that adds the slop for SpanNearQuery as well.
A QueryParser that changes the way by which stop words in the query are handled.
This class provides a custom KeyValue-based reader and writer implementation to limit the amount of data that needs to be read from FDB.
A StoredFieldsReader implementation for Stored Fields stored in the DB.
An implementation of StoredFieldsWriter for fields stored in the DB.
Manage partitioning info for a logical, partitioned Lucene index, in which each partition is a separate physical Lucene index.
Encapsulate and manage additional log messages when repartitioning.
A planner to implement Lucene query planning so that the Lucene functionality can be isolated in a distinct package.
Maintain a B-tree index of primary key to segment and doc id.
Maintain a B-tree index of primary key to segment and doc id.
Hook for getting back segment info during merge.
Maintain a B-tree index of primary key to segment and doc id.
Binder for a single query clause.
Helper class to capture a bound query.
A Query Component for Lucene that wraps the query supplied.
Query clause using a Comparisons.Comparison against a document field.
Query clause from string using Lucene search syntax.
A factory implementation for Query Parsers.
The provider for the implementations of LuceneQueryParserFactory.
The default implementation is a ConfigAwareQueryParser with the default list of stop words.
Query clause from string using Lucene search syntax.
The type of component.
The list of RecordLayerPropertyKey instances for configuring Lucene indexing for an FDBRecordContext.
This class is a Record Cursor implementation for Lucene queries.
An IndexEntry based off a Lucene ScoreDoc.
Manage repartitioning details (merging small partitions and splitting large ones).
Convenience collection of data needed for repartitioning.
Base class for IndexScanBounds used by LUCENE indexes.
Base class for IndexScanParameters used by LUCENE indexes.
Scan a LUCENE index using a Lucene Query.
Scan parameters for making a LuceneScanQuery.
Deserializer.
The parameters for highlighting matching terms of a Lucene search.
Scan a LUCENE index for auto-complete suggestions.
Scan parameters for making a LuceneScanSpellCheck.
Deserializer.
IndexScanTypes for Lucene.
Serialize a Lucene directory block to/from an FDB key-value byte array.
Cursor over Lucene spell-check query results.
An NGRAM analyzer.
Factory for NgramAnalyzer.
An IndexInput that attempts to keep, at a minimum, a 10-block buffer at or ahead of the current read position.
The goal of this helper class is to extract terms from queries.
Utilities for use in using query parsers.
Exception thrown when encountering issues serializing IDs using LuceneIndexKeySerializer.
Exception thrown when the RecordIdFormat size exceeds the maximum.
The format model for the index key formatter.
The enum of the various types available for the format.
A collection of elements matching a Tuple structure in the key.
A simple parser for the string that represents the RecordIdFormat.
A SynonymGraphFilterFactory which uses an underlying Registry to statically cache synonym mappings, which is _significantly_ more efficient when using lots of distinct analyzers (such as during highlighting, or with lots of parallel record stores).
A PassageFormatter which keeps a whole number of words before and after matching text entries to provide context, and inserts ellipses in between matched strings to form a summarized text.
An optimization to open the segment readers in parallel when opening a directory.
The analyzer for indexes with synonyms enabled.
An analyzer factory that includes synonym tokenizing at both index and query time.
An analyzer factory that includes on-the-fly synonym tokenizing at query time.
Configuration of the synonym map, which is built from a file.
Registry for SynonymAnalyzers.
Registry for SynonymMaps.
A PassageFormatter which creates Highlighted terms that contain the whole context of the field as the summarized text.
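As a rough illustration of the "lazy opener" pattern referenced earlier in this list (a class to lazily "open" something that may throw an IOException, backed by a function that returns an object but may throw an IOException), here is a minimal Java sketch. It is not the library's actual implementation; the names LazyOpenerSketch, IOSupplier, supply, get, and getUnchecked are illustrative assumptions only.

import java.io.IOException;
import java.io.UncheckedIOException;

/**
 * Minimal sketch of a lazy opener: defer an open that may throw an
 * IOException until the value is first requested, then cache the result.
 * Names here are illustrative, not the library's actual API.
 */
public final class LazyOpenerSketch<T> {
    /** A function that returns an object, but may throw an IOException. */
    @FunctionalInterface
    public interface IOSupplier<T> {
        T get() throws IOException;
    }

    private final IOSupplier<T> opener;
    private T opened;   // cached result after the first successful open

    private LazyOpenerSketch(final IOSupplier<T> opener) {
        this.opener = opener;
    }

    public static <T> LazyOpenerSketch<T> supply(final IOSupplier<T> opener) {
        return new LazyOpenerSketch<>(opener);
    }

    /** Open on first use; the checked IOException propagates to the caller. */
    public synchronized T get() throws IOException {
        if (opened == null) {
            opened = opener.get();
        }
        return opened;
    }

    /** Convenience accessor that rethrows IOException as an unchecked exception. */
    public T getUnchecked() {
        try {
            return get();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

As a usage example under the same assumptions, one might wrap a Lucene Directory.openInput call, e.g. LazyOpenerSketch.supply(() -> directory.openInput(fileName, IOContext.DEFAULT)), so the underlying file is opened only when the input is first read.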