public class HStore extends java.lang.Object implements Store, StoreConfigInformation, PropagatingConfigurationObserver
There is no reason to consider append-logging at this level; all logging and locking is handled at the HRegion level. Store just provides services to manage sets of StoreFiles. One of the most important of those services is compaction, where files are aggregated once they pass a configurable threshold.
Locking and transactions are handled at a higher level. This API should not be called directly but by an HRegion manager.
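HStore is not meant to be called directly; region-internal code reaches a store through its HRegion. Below is a minimal sketch (not from this Javadoc) assuming HRegion.getStore(byte[]) as the lookup entry point and a column family named "cf":

```java
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative only: given an already-open region, look up the store for
// family "cf" and read a few of the metrics documented on this page.
final class StoreInspection {
  static void inspect(HRegion region) {
    HStore store = region.getStore(Bytes.toBytes("cf"));
    System.out.println("hfiles: " + store.getNumHFiles());
    System.out.println("needs compaction: " + store.needsCompaction());
    System.out.println("compaction pressure: " + store.getCompactionPressure());
  }
}
```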
| Modifier and Type | Field and Description |
|---|---|
| static java.lang.String | BLOCK_STORAGE_POLICY_KEY |
| static java.lang.String | BLOCKING_STOREFILES_KEY |
| protected int | blocksize |
| protected int | bytesPerChecksum |
| protected CacheConfig | cacheConf |
| protected ChecksumType | checksumType: Checksum configuration |
| static java.lang.String | COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY |
| protected CellComparator | comparator |
| protected Configuration | conf |
| protected Encryption.Context | cryptoContext |
| static long | DEEP_OVERHEAD |
| static java.lang.String | DEFAULT_BLOCK_STORAGE_POLICY |
| static int | DEFAULT_BLOCKING_STOREFILE_COUNT |
| static int | DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER |
| static long | FIXED_OVERHEAD |
| protected MemStore | memstore |
| static java.lang.String | MEMSTORE_CLASS_NAME |
| protected HRegion | region |
Fields inherited from interface Store: NO_PRIORITY, PRIORITY_USER

| Modifier | Constructor and Description |
|---|---|
| protected | HStore(HRegion region, ColumnFamilyDescriptor family, Configuration confParam): Constructor |
| Modifier and Type | Method and Description |
|---|---|
| void | add(Cell cell, MemStoreSizing memstoreSizing): Adds a value to the memstore |
| void | add(java.lang.Iterable<Cell> cells, MemStoreSizing memstoreSizing): Adds the specified values to the memstore |
| void | addChangedReaderObserver(ChangedReadersObserver o) |
| boolean | areWritesEnabled() |
| void | assertBulkLoadHFileOk(Path srcPath): Throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid |
| Path | bulkLoadHFile(byte[] family, java.lang.String srcPathStr, Path dstPath) |
| void | bulkLoadHFile(StoreFileInfo fileInfo) |
| void | cancelRequestedCompaction(CompactionContext compaction) |
| boolean | canSplit(): Returns whether this store is splittable, i.e., whether it contains no reference files |
| <any> | close(): Close all the readers; we don't need to worry about subsequent requests because the Region holds a write lock that will prevent any more reads or writes |
| void | closeAndArchiveCompactedFiles(): Closes and archives the compacted files under this store |
| java.util.List<HStoreFile> | compact(CompactionContext compaction, ThroughputController throughputController, User user): Compact the StoreFiles |
| void | compactRecentForTestingAssumingDefaultPolicy(int N): Tries to compact N recent files for testing |
| protected void | completeCompaction(java.util.Collection<HStoreFile> compactedFiles): Processes a compaction that has been written to disk |
| protected void | createCacheConf(ColumnFamilyDescriptor family): Creates the cache config |
| org.apache.hadoop.hbase.regionserver.StoreFlushContext | createFlushContext(long cacheFlushId, FlushLifeCycleTracker tracker) |
| protected KeyValueScanner | createScanner(Scan scan, ScanInfo scanInfo, java.util.NavigableSet<byte[]> targetCols, long readPt) |
| protected StoreEngine<?,?,?,?> | createStoreEngine(HStore store, Configuration conf, CellComparator kvComparator): Creates the store engine configured for the given Store |
| protected HStoreFile | createStoreFileAndReader(Path p) |
| StoreFileWriter | createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTag, boolean shouldDropBehind) |
| void | deleteChangedReaderObserver(ChangedReadersObserver o) |
| void | deregisterChildren(ConfigurationManager manager): Needs to be called to deregister the children from the manager |
| static long | determineTTLFromFamily(ColumnFamilyDescriptor family) |
| protected java.util.List<HStoreFile> | doCompaction(CompactionRequestImpl cr, java.util.Collection<HStoreFile> filesToCompact, User user, long compactionStartTime, java.util.List<Path> newFiles) |
| protected java.util.List<Path> | flushCache(long logCacheFlushId, MemStoreSnapshot snapshot, MonitoredTask status, ThroughputController throughputController, FlushLifeCycleTracker tracker): Write out the current snapshot |
| java.util.OptionalDouble | getAvgStoreFileAge() |
| long | getBlockingFileCount(): The number of files required before flushes for this store will be blocked |
| static int | getBytesPerChecksum(Configuration conf): Returns the configured bytesPerChecksum value |
| CacheConfig | getCacheConfig(): Used for tests |
| static ChecksumType | getChecksumType(Configuration conf): Returns the configured checksum algorithm |
| static int | getCloseCheckInterval() |
| ColumnFamilyDescriptor | getColumnFamilyDescriptor() |
| java.lang.String | getColumnFamilyName() |
| long | getCompactedCellsCount() |
| long | getCompactedCellsSize() |
| java.util.Collection<HStoreFile> | getCompactedFiles() |
| int | getCompactedFilesCount() |
| long | getCompactionCheckMultiplier() |
| double | getCompactionPressure(): This value can represent the degree of emergency of compaction for this store |
| CompactionProgress | getCompactionProgress(): Getter for the CompactionProgress object |
| int | getCompactPriority() |
| CellComparator | getComparator() |
| RegionCoprocessorHost | getCoprocessorHost() |
| HFileDataBlockEncoder | getDataBlockEncoder() |
| FileSystem | getFileSystem() |
| MemStoreSize | getFlushableSize() |
| long | getFlushedCellsCount() |
| long | getFlushedCellsSize() |
| long | getFlushedOutputFileSize() |
| long | getHFilesSize() |
| HRegion | getHRegion() |
| long | getLastCompactSize() |
| long | getMajorCompactedCellsCount() |
| long | getMajorCompactedCellsSize() |
| java.util.OptionalLong | getMaxMemStoreTS() |
| java.util.OptionalLong | getMaxSequenceId() |
| java.util.OptionalLong | getMaxStoreFileAge() |
| long | getMemStoreFlushSize() |
| MemStoreSize | getMemStoreSize() |
| java.util.OptionalLong | getMinStoreFileAge() |
| long | getNumHFiles() |
| long | getNumReferenceFiles() |
| protected OffPeakHours | getOffPeakHours() |
| HRegionFileSystem | getRegionFileSystem() |
| RegionInfo | getRegionInfo() |
| ScanInfo | getScanInfo() |
| KeyValueScanner | getScanner(Scan scan, java.util.NavigableSet<byte[]> targetCols, long readPt): Return a scanner for both the memstore and the HStore files |
| java.util.List<KeyValueScanner> | getScanners(boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt): Get all scanners with no filtering based on TTL (that happens further down the line) |
| java.util.List<KeyValueScanner> | getScanners(boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt): Get all scanners with no filtering based on TTL (that happens further down the line) |
| java.util.List<KeyValueScanner> | getScanners(java.util.List<HStoreFile> files, boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt, boolean includeMemstoreScanner): Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line) |
| java.util.List<KeyValueScanner> | getScanners(java.util.List<HStoreFile> files, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner): Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line) |
| long | getSize() |
| long | getSmallestReadPoint() |
| MemStoreSize | getSnapshotSize() |
| java.util.Optional<byte[]> | getSplitPoint(): Determines if the Store should be split |
| StoreEngine<?,?,?,?> | getStoreEngine(): Returns the StoreEngine that is backing this concrete implementation of Store |
| java.util.Collection<HStoreFile> | getStorefiles() |
| int | getStorefilesCount() |
| long | getStorefilesRootLevelIndexSize() |
| long | getStorefilesSize() |
| long | getStoreFileTtl() |
| static Path | getStoreHomedir(Path tabledir, RegionInfo hri, byte[] family): Deprecated |
| static Path | getStoreHomedir(Path tabledir, java.lang.String encodedName, byte[] family): Deprecated |
| long | getStoreSizeUncompressed() |
| TableName | getTableName() |
| long | getTotalStaticBloomSize(): Returns the total byte size of all Bloom filter bit arrays |
| long | getTotalStaticIndexSize(): Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes |
| boolean | hasReferences() |
| boolean | hasTooManyStoreFiles() |
| long | heapSize() |
| boolean | isPrimaryReplicaStore() |
| boolean | isSloppyMemStore() |
| boolean | needsCompaction(): Checks whether there are too many store files in this store |
| void | onConfigurationChange(Configuration conf): Called by the ConfigurationManager object when the Configuration object is reloaded from disk |
| void | postSnapshotOperation(): Perform tasks needed after the completion of the snapshot operation |
| <any> | preBulkLoadHFile(java.lang.String srcPathStr, long seqNum): This method should only be called from Region |
| java.lang.Long | preFlushSeqIDEstimation() |
| void | preSnapshotOperation(): Sets the store up for a region-level snapshot operation |
| java.util.List<KeyValueScanner> | recreateScanners(java.util.List<KeyValueScanner> currentFileScanners, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner): Recreates the scanners on the current list of active store file scanners |
| void | refreshStoreFiles(): Checks the underlying store files, opens the files that have not been opened, and removes the store file readers for store files no longer available |
| void | refreshStoreFiles(java.util.Collection<java.lang.String> newFiles): Replaces the store files that the store has with the given files |
| void | registerChildren(ConfigurationManager manager): Needs to be called to register the children with the manager |
| void | replayCompactionMarker(CompactionDescriptor compaction, boolean pickCompactionFiles, boolean removeFiles): Call to complete a compaction |
| java.util.Optional<CompactionContext> | requestCompaction() |
| java.util.Optional<CompactionContext> | requestCompaction(int priority, CompactionLifeCycleTracker tracker, User user) |
| boolean | shouldPerformMajorCompaction(): Tests whether we should run a major compaction |
| void | startReplayingFromWAL(): Informs the MemStore that the coming updates are part of replaying edits from the WAL |
| void | stopReplayingFromWAL(): Informs the MemStore that replaying edits from the WAL is done |
| boolean | throttleCompaction(long compactionSize) |
| long | timeOfOldestEdit(): When was the last edit done in the memstore |
| java.lang.String | toString() |
| void | triggerMajorCompaction() |
| void | upsert(java.lang.Iterable<Cell> cells, long readpoint, MemStoreSizing memstoreSizing): Adds or replaces the specified KeyValues |
public static final java.lang.String MEMSTORE_CLASS_NAME
public static final java.lang.String COMPACTCHECKER_INTERVAL_MULTIPLIER_KEY
public static final java.lang.String BLOCKING_STOREFILES_KEY
public static final java.lang.String BLOCK_STORAGE_POLICY_KEY
public static final java.lang.String DEFAULT_BLOCK_STORAGE_POLICY
public static final int DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER
public static final int DEFAULT_BLOCKING_STOREFILE_COUNT
protected final MemStore memstore
protected final HRegion region
protected Configuration conf
protected CacheConfig cacheConf
protected final int blocksize
protected ChecksumType checksumType
protected int bytesPerChecksum
protected final CellComparator comparator
protected Encryption.Context cryptoContext
public static final long FIXED_OVERHEAD
public static final long DEEP_OVERHEAD
protected HStore(HRegion region, ColumnFamilyDescriptor family, Configuration confParam) throws java.io.IOException
region -
family - HColumnDescriptor for this column
confParam - configuration object. Can be null.
Throws: java.io.IOException

protected void createCacheConf(ColumnFamilyDescriptor family)
Creates the cache config.
family - The current column family.

protected StoreEngine<?,?,?,?> createStoreEngine(HStore store, Configuration conf, CellComparator kvComparator) throws java.io.IOException
Creates the store engine configured for the given Store.
store - The store. An unfortunate dependency needed because it is passed to coprocessors via the compactor.
conf - Store configuration.
kvComparator - KVComparator for the storeFileManager.
Throws: java.io.IOException

public static long determineTTLFromFamily(ColumnFamilyDescriptor family)
family -

public java.lang.String getColumnFamilyName()
Specified by: getColumnFamilyName in interface Store

public TableName getTableName()
Specified by: getTableName in interface Store

public FileSystem getFileSystem()
Specified by: getFileSystem in interface Store

public HRegionFileSystem getRegionFileSystem()

public long getStoreFileTtl()
Specified by: getStoreFileTtl in interface StoreConfigInformation

public long getMemStoreFlushSize()
Specified by: getMemStoreFlushSize in interface StoreConfigInformation

public MemStoreSize getFlushableSize()
Specified by: getFlushableSize in interface Store
Returns: Store.getMemStoreSize(), unless we are carrying snapshots, in which case it returns the size of outstanding snapshots.

public MemStoreSize getSnapshotSize()
Specified by: getSnapshotSize in interface Store

public long getCompactionCheckMultiplier()
Specified by: getCompactionCheckMultiplier in interface StoreConfigInformation

public long getBlockingFileCount()
Description copied from interface: StoreConfigInformation
The number of files required before flushes for this store will be blocked.
Specified by: getBlockingFileCount in interface StoreConfigInformation

public static int getBytesPerChecksum(Configuration conf)
Returns the configured bytesPerChecksum value.
conf - The configuration

public static ChecksumType getChecksumType(Configuration conf)
Returns the configured checksum algorithm.
conf - The configuration

public static int getCloseCheckInterval()
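A short sketch of using these static helpers to read the store's checksum settings out of a Configuration (the wrapper class and main method are illustrative, not part of the API):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.util.ChecksumType;

final class ChecksumSettings {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Both helpers fall back to built-in defaults when the keys are unset.
    int bytesPerChecksum = HStore.getBytesPerChecksum(conf);
    ChecksumType type = HStore.getChecksumType(conf);
    System.out.println(bytesPerChecksum + " bytes per " + type + " checksum");
  }
}
```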
public ColumnFamilyDescriptor getColumnFamilyDescriptor()
Specified by: getColumnFamilyDescriptor in interface Store

public java.util.OptionalLong getMaxSequenceId()
Specified by: getMaxSequenceId in interface Store

public java.util.OptionalLong getMaxMemStoreTS()
Specified by: getMaxMemStoreTS in interface Store

@Deprecated
public static Path getStoreHomedir(Path tabledir, RegionInfo hri, byte[] family)
tabledir - Path to where the table is being stored
hri - RegionInfo for the region.
family - ColumnFamilyDescriptor describing the column family

@Deprecated
public static Path getStoreHomedir(Path tabledir, java.lang.String encodedName, byte[] family)
tabledir - Path to where the table is being stored
encodedName - Encoded region name.
family - ColumnFamilyDescriptor describing the column family

public HFileDataBlockEncoder getDataBlockEncoder()

public void refreshStoreFiles() throws java.io.IOException
Description copied from interface: Store
Checks the underlying store files, opens the files that have not been opened, and removes the store file readers for store files no longer available.
Specified by: refreshStoreFiles in interface Store
Throws: java.io.IOException

public void refreshStoreFiles(java.util.Collection<java.lang.String> newFiles) throws java.io.IOException
Replaces the store files that the store has with the given files.
Throws: java.io.IOException

protected HStoreFile createStoreFileAndReader(Path p) throws java.io.IOException
Throws: java.io.IOException

public void startReplayingFromWAL()

public void stopReplayingFromWAL()
public void add(Cell cell, MemStoreSizing memstoreSizing)

public void add(java.lang.Iterable<Cell> cells, MemStoreSizing memstoreSizing)
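A sketch of the add() contract: the caller (normally HRegion) supplies a MemStoreSizing that the store updates in place with the memstore growth. NonThreadSafeMemStoreSizing is assumed here as a concrete MemStoreSizing implementation, and a KeyValue is used to build the cell:

```java
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.MemStoreSizing;
import org.apache.hadoop.hbase.regionserver.NonThreadSafeMemStoreSizing;
import org.apache.hadoop.hbase.util.Bytes;

final class AddToMemStore {
  static void addOne(HStore store) {
    Cell cell = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), System.currentTimeMillis(), Bytes.toBytes("v"));
    // The store increments the sizing as it accounts for the new cell.
    MemStoreSizing sizing = new NonThreadSafeMemStoreSizing();
    store.add(cell, sizing);
    System.out.println("data size delta: " + sizing.getDataSize());
  }
}
```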
public long timeOfOldestEdit()
Description copied from interface: Store
When was the last edit done in the memstore.
Specified by: timeOfOldestEdit in interface Store

public java.util.Collection<HStoreFile> getStorefiles()
Specified by: getStorefiles in interface Store

public java.util.Collection<HStoreFile> getCompactedFiles()
Specified by: getCompactedFiles in interface Store

public void assertBulkLoadHFileOk(Path srcPath) throws java.io.IOException
Throws a WrongRegionException if the HFile does not fit in this region, or an InvalidHFileException if the HFile is not valid.
Throws: java.io.IOException

public <any> preBulkLoadHFile(java.lang.String srcPathStr, long seqNum) throws java.io.IOException
This method should only be called from Region.
srcPathStr -
seqNum - sequence Id associated with the HFile
Throws: java.io.IOException

public Path bulkLoadHFile(byte[] family, java.lang.String srcPathStr, Path dstPath) throws java.io.IOException
Throws: java.io.IOException

public void bulkLoadHFile(StoreFileInfo fileInfo) throws java.io.IOException
Throws: java.io.IOException

public <any> close() throws java.io.IOException
Close all the readers. We don't need to worry about subsequent requests because the Region holds a write lock that will prevent any more reads or writes.
Returns: the StoreFiles that were previously being used.
Throws: java.io.IOException - on failure

protected java.util.List<Path> flushCache(long logCacheFlushId, MemStoreSnapshot snapshot, MonitoredTask status, ThroughputController throughputController, FlushLifeCycleTracker tracker) throws java.io.IOException
Write out the current snapshot. Presumes that a snapshot() has been called previously.
logCacheFlushId - flush sequence number
snapshot -
status -
throughputController -
Throws: java.io.IOException - if an exception occurs during the flush

public StoreFileWriter createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTag, boolean shouldDropBehind) throws java.io.IOException
maxKeyCount -
compression - Compression algorithm to use
isCompaction - whether we are creating a new file in a compaction
includeMVCCReadpoint - whether to include the MVCC read point or not
includesTag - whether to include tags or not
Throws: java.io.IOException

public java.util.List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt) throws java.io.IOException
Get all scanners with no filtering based on TTL (that happens further down the line).
cacheBlocks - whether to cache the blocks
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
stopRow - the stop row
readPt - the read point of the current scan
Throws: java.io.IOException

public java.util.List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt) throws java.io.IOException
Get all scanners with no filtering based on TTL (that happens further down the line).
cacheBlocks - whether to cache the blocks
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
includeStartRow - true to include the start row, false if not
stopRow - the stop row
includeStopRow - true to include the stop row, false if not
readPt - the read point of the current scan
Throws: java.io.IOException

public java.util.List<KeyValueScanner> getScanners(java.util.List<HStoreFile> files, boolean cacheBlocks, boolean isGet, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, byte[] stopRow, long readPt, boolean includeMemstoreScanner) throws java.io.IOException
Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line).
files - the list of files on which the scanners have to be created
cacheBlocks - whether to cache the blocks
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
stopRow - the stop row
readPt - the read point of the current scan
includeMemstoreScanner - true if the memstore has to be included
Throws: java.io.IOException

public java.util.List<KeyValueScanner> getScanners(java.util.List<HStoreFile> files, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner) throws java.io.IOException
Create scanners on the given files and, if needed, on the memstore, with no filtering based on TTL (that happens further down the line).
files - the list of files on which the scanners have to be created
cacheBlocks - whether to cache the blocks
usePread - true to use pread, false if not
isCompaction - true if the scanner is created for compaction
matcher - the scan query matcher
startRow - the start row
includeStartRow - true to include the start row, false if not
stopRow - the stop row
includeStopRow - true to include the stop row, false if not
readPt - the read point of the current scan
includeMemstoreScanner - true if the memstore has to be included
Throws: java.io.IOException

public void addChangedReaderObserver(ChangedReadersObserver o)
o - Observer who wants to know about changes in the set of Readers

public void deleteChangedReaderObserver(ChangedReadersObserver o)
o - Observer no longer interested in changes in the set of Readers.

public java.util.List<HStoreFile> compact(CompactionContext compaction, ThroughputController throughputController, User user) throws java.io.IOException
Compact the StoreFiles. During this time, the Store can work as usual, getting values from StoreFiles and writing new StoreFiles from the memstore. Existing StoreFiles are not destroyed until the new compacted StoreFile is completely written out to disk.
The compactLock prevents multiple simultaneous compactions. The structureLock prevents us from interfering with other write operations.
We don't want to hold the structureLock for the whole time, as a compact() can be lengthy and we want to allow cache flushes during this period.
Compaction events should be idempotent, since there is no IO fencing for the region directory in HDFS. A region server might still try to complete the compaction after it lost the region. That is why the following events are carefully ordered for a compaction:
1. Compaction writes new files under the region/.tmp directory (compaction output).
2. Compaction atomically moves the temporary file under the region directory.
3. Compaction appends a WAL edit containing the compaction input and output files, and forces a sync on the WAL.
4. Compaction deletes the input files from the region directory.
Failure conditions are handled like this:
- If the RS fails before 2, the compaction won't complete. Even if the RS lives on and finishes the compaction later, it will only write the new data file to the region directory. Since we already have this data, this will be idempotent, but we will have a redundant copy of the data.
- If the RS fails between 2 and 3, the region will have a redundant copy of the data. The RS that failed won't be able to finish sync() on the WAL because of lease recovery in the WAL.
- If the RS fails after 3, the region server that opens the region will pick up the compaction marker from the WAL and replay it by removing the compaction input files. The failed RS can also attempt to delete those files, but the operation will be idempotent.
See HBASE-2231 for details.
compaction - compaction details obtained from requestCompaction()
Throws: java.io.IOException

protected java.util.List<HStoreFile> doCompaction(CompactionRequestImpl cr, java.util.Collection<HStoreFile> filesToCompact, User user, long compactionStartTime, java.util.List<Path> newFiles) throws java.io.IOException
Throws: java.io.IOException

public void replayCompactionMarker(CompactionDescriptor compaction, boolean pickCompactionFiles, boolean removeFiles) throws java.io.IOException
Call to complete a compaction.
compaction -
Throws: java.io.IOException

public void compactRecentForTestingAssumingDefaultPolicy(int N) throws java.io.IOException
This method tries to compact N recent files for testing.
N - Number of files.
Throws: java.io.IOException

public boolean hasReferences()
Specified by: hasReferences in interface Store
Returns: true if the store has any underlying reference files to older HFiles

public CompactionProgress getCompactionProgress()
Getter for the CompactionProgress object.

public boolean shouldPerformMajorCompaction() throws java.io.IOException
Description copied from interface: Store
Tests whether we should run a major compaction.
Specified by: shouldPerformMajorCompaction in interface Store
Throws: java.io.IOException

public java.util.Optional<CompactionContext> requestCompaction() throws java.io.IOException
Throws: java.io.IOException

public java.util.Optional<CompactionContext> requestCompaction(int priority, CompactionLifeCycleTracker tracker, User user) throws java.io.IOException
Throws: java.io.IOException

public void cancelRequestedCompaction(CompactionContext compaction)
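A sketch of the request/execute/cancel cycle described above. The package locations of CompactionContext and ThroughputController are assumed (regionserver.compactions and regionserver.throttle), and a null User denotes a system-initiated compaction:

```java
import java.io.IOException;
import java.util.List;
import java.util.Optional;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.HStoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;
import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;

final class CompactionCycle {
  static void compactOnce(HStore store, ThroughputController controller) throws IOException {
    Optional<CompactionContext> request = store.requestCompaction();
    if (!request.isPresent()) {
      return; // nothing was selected for compaction
    }
    CompactionContext context = request.get();
    try {
      List<HStoreFile> newFiles = store.compact(context, controller, null);
      System.out.println("compacted into " + newFiles.size() + " file(s)");
    } catch (IOException e) {
      store.cancelRequestedCompaction(context); // release the selected files
      throw e;
    }
  }
}
```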
protected void completeCompaction(java.util.Collection<HStoreFile> compactedFiles) throws java.io.IOException
It works by processing a compaction that has been written to disk.
It is usually invoked at the end of a compaction, but might also be invoked at HStore startup, if the prior execution died midway through.
Moving the compacted TreeMap into place means:
1) Unload all replaced StoreFiles, close them, and collect the list to delete. 2) Compute the new store size.
compactedFiles - list of files that were compacted
Throws: java.io.IOException

public boolean canSplit()
Description copied from interface: Store
Returns whether this store is splittable, i.e., whether it contains no reference files.

public java.util.Optional<byte[]> getSplitPoint()
Determines if the Store should be split.

public long getLastCompactSize()
Specified by: getLastCompactSize in interface Store

public long getSize()

public void triggerMajorCompaction()
public KeyValueScanner getScanner(Scan scan, java.util.NavigableSet<byte[]> targetCols, long readPt) throws java.io.IOException
Return a scanner for both the memstore and the HStore files.
scan - Scan to apply when scanning the stores
targetCols - columns to scan
Throws: java.io.IOException - on failure

protected KeyValueScanner createScanner(Scan scan, ScanInfo scanInfo, java.util.NavigableSet<byte[]> targetCols, long readPt) throws java.io.IOException
Throws: java.io.IOException

public java.util.List<KeyValueScanner> recreateScanners(java.util.List<KeyValueScanner> currentFileScanners, boolean cacheBlocks, boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] startRow, boolean includeStartRow, byte[] stopRow, boolean includeStopRow, long readPt, boolean includeMemstoreScanner) throws java.io.IOException
Recreates the scanners on the current list of active store file scanners.
currentFileScanners - the current set of active store file scanners
cacheBlocks - whether to cache the blocks
usePread - whether to use pread
isCompaction - whether the scanner is for compaction
matcher - the scan query matcher
startRow - the scan's start row
includeStartRow - whether the scan should include the start row
stopRow - the scan's stop row
includeStopRow - whether the scan should include the stop row
readPt - the read point of the current scan
includeMemstoreScanner - whether the current scanner should include a memstore scanner
Throws: java.io.IOException

public java.lang.String toString()
Overrides: toString in class java.lang.Object

public int getStorefilesCount()
Specified by: getStorefilesCount in interface Store

public int getCompactedFilesCount()
Specified by: getCompactedFilesCount in interface Store

public java.util.OptionalLong getMaxStoreFileAge()
Specified by: getMaxStoreFileAge in interface Store

public java.util.OptionalLong getMinStoreFileAge()
Specified by: getMinStoreFileAge in interface Store

public java.util.OptionalDouble getAvgStoreFileAge()
Specified by: getAvgStoreFileAge in interface Store

public long getNumReferenceFiles()
Specified by: getNumReferenceFiles in interface Store

public long getNumHFiles()
Specified by: getNumHFiles in interface Store

public long getStoreSizeUncompressed()
Specified by: getStoreSizeUncompressed in interface Store

public long getStorefilesSize()
Specified by: getStorefilesSize in interface Store

public long getHFilesSize()
Specified by: getHFilesSize in interface Store

public long getStorefilesRootLevelIndexSize()
Specified by: getStorefilesRootLevelIndexSize in interface Store

public long getTotalStaticIndexSize()
Description copied from interface: Store
Returns the total size of all index blocks in the data block indexes, including the root level, intermediate levels, and the leaf level for multi-level indexes, or just the root level for single-level indexes.
Specified by: getTotalStaticIndexSize in interface Store

public long getTotalStaticBloomSize()
Description copied from interface: Store
Returns the total byte size of all Bloom filter bit arrays.
Specified by: getTotalStaticBloomSize in interface Store

public MemStoreSize getMemStoreSize()
Specified by: getMemStoreSize in interface Store

public int getCompactPriority()
Specified by: getCompactPriority in interface Store

public boolean throttleCompaction(long compactionSize)
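A usage sketch for the getScanner(Scan, NavigableSet, long) method documented above. Real reads go through a region scanner; here the read point is taken as a plain argument (in practice it comes from the region's MVCC), and passing null target columns is assumed to mean all columns:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.KeyValueScanner;

final class StoreScan {
  static void scanAll(HStore store, long readPt) throws IOException {
    // One scanner over both the memstore and the store files.
    KeyValueScanner scanner = store.getScanner(new Scan(), null, readPt);
    try {
      for (Cell cell = scanner.next(); cell != null; cell = scanner.next()) {
        System.out.println(cell);
      }
    } finally {
      scanner.close();
    }
  }
}
```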
public HRegion getHRegion()
public RegionCoprocessorHost getCoprocessorHost()
public RegionInfo getRegionInfo()
Specified by: getRegionInfo in interface Store

public boolean areWritesEnabled()
Specified by: areWritesEnabled in interface Store

public long getSmallestReadPoint()
Specified by: getSmallestReadPoint in interface Store

public void upsert(java.lang.Iterable<Cell> cells, long readpoint, MemStoreSizing memstoreSizing) throws java.io.IOException
Adds or replaces the specified KeyValues.
For each KeyValue specified, if a cell with the same row, family, and qualifier exists in MemStore, it will be replaced. Otherwise, it will just be inserted into MemStore.
This operation is atomic on each KeyValue (row/family/qualifier) but not necessarily atomic across all of them.
readpoint - readpoint below which we can safely remove duplicate KVs
Throws: java.io.IOException

public org.apache.hadoop.hbase.regionserver.StoreFlushContext createFlushContext(long cacheFlushId, FlushLifeCycleTracker tracker)
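A sketch of the upsert() semantics described above: a second upsert of the same row/family/qualifier replaces the first entry in the MemStore rather than stacking another version. NonThreadSafeMemStoreSizing is again assumed as the sizing implementation:

```java
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.MemStoreSizing;
import org.apache.hadoop.hbase.regionserver.NonThreadSafeMemStoreSizing;
import org.apache.hadoop.hbase.util.Bytes;

final class UpsertExample {
  static void upsertTwice(HStore store) throws IOException {
    MemStoreSizing sizing = new NonThreadSafeMemStoreSizing();
    Cell v1 = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), 1L, Bytes.toBytes("v1"));
    Cell v2 = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), 2L, Bytes.toBytes("v2"));
    // Duplicates below the smallest read point can be removed safely.
    long readPt = store.getSmallestReadPoint();
    store.upsert(Arrays.asList(v1), readPt, sizing);
    store.upsert(Arrays.asList(v2), readPt, sizing); // replaces v1, not stacked
  }
}
```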
public boolean needsCompaction()
Description copied from interface: Store
See if there are too many store files in this store.
Specified by: needsCompaction in interface Store
Returns: true if the number of store files is greater than the number defined in minFilesToCompact

public CacheConfig getCacheConfig()
Used for tests.

public long heapSize()

public CellComparator getComparator()
Specified by: getComparator in interface Store

public ScanInfo getScanInfo()

public boolean hasTooManyStoreFiles()
Specified by: hasTooManyStoreFiles in interface Store

public long getFlushedCellsCount()
Specified by: getFlushedCellsCount in interface Store

public long getFlushedCellsSize()
Specified by: getFlushedCellsSize in interface Store

public long getFlushedOutputFileSize()
Specified by: getFlushedOutputFileSize in interface Store

public long getCompactedCellsCount()
Specified by: getCompactedCellsCount in interface Store

public long getCompactedCellsSize()
Specified by: getCompactedCellsSize in interface Store

public long getMajorCompactedCellsCount()
Specified by: getMajorCompactedCellsCount in interface Store

public long getMajorCompactedCellsSize()
Specified by: getMajorCompactedCellsSize in interface Store

public StoreEngine<?,?,?,?> getStoreEngine()
Returns the StoreEngine that is backing this concrete implementation of Store.
Returns: the StoreEngine object used internally inside this HStore object.

protected OffPeakHours getOffPeakHours()

public void onConfigurationChange(Configuration conf)
This method would be called by the ConfigurationManager object when the Configuration object is reloaded from disk.
Specified by: onConfigurationChange in interface ConfigurationObserver

public void registerChildren(ConfigurationManager manager)
Needs to be called to register the children with the manager.
Specified by: registerChildren in interface PropagatingConfigurationObserver
manager - the manager to register with

public void deregisterChildren(ConfigurationManager manager)
Needs to be called to deregister the children from the manager.
Specified by: deregisterChildren in interface PropagatingConfigurationObserver
manager - the manager to deregister from

public double getCompactionPressure()
Description copied from interface: Store
This value can represent the degree of emergency of compaction for this store. For striped stores, we should calculate this value by the files in each stripe separately and return the maximum value.
It is similar to Store.getCompactPriority() except that it is more suitable for use in a linear formula.
Specified by: getCompactionPressure in interface Store

public boolean isPrimaryReplicaStore()
Specified by: isPrimaryReplicaStore in interface Store

public void preSnapshotOperation()
Sets the store up for a region-level snapshot operation.
See also: postSnapshotOperation()

public void postSnapshotOperation()
Perform tasks needed after the completion of the snapshot operation.
See also: preSnapshotOperation()

public void closeAndArchiveCompactedFiles() throws java.io.IOException
Closes and archives the compacted files under this store.
Throws: java.io.IOException

public java.lang.Long preFlushSeqIDEstimation()

public boolean isSloppyMemStore()
Specified by: isSloppyMemStore in interface Store
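Finally, a sketch of the configuration-observer wiring: a ConfigurationManager notifies registered observers when the Configuration is reloaded from disk, and the store propagates registration to its children. The explicit calls below only illustrate who invokes what; in practice the region server's ConfigurationManager drives them:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.conf.ConfigurationManager;
import org.apache.hadoop.hbase.regionserver.HStore;

final class ConfigReload {
  static void rewire(ConfigurationManager manager, HStore store, Configuration newConf) {
    store.registerChildren(manager);       // register the store's children for updates
    store.onConfigurationChange(newConf);  // what the manager invokes on reload
    store.deregisterChildren(manager);     // cleanup, e.g. when the store closes
  }
}
```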