hbase-user mailing list archives

From Geovanie Marquez <geovanie.marq...@gmail.com>
Subject Re: Aggressive compactions
Date Sun, 10 Aug 2014 18:48:56 GMT
The default: 1.2F

For minor compaction, this ratio is used to determine whether a given
StoreFile which is larger than hbase.hstore.compaction.min.size is eligible
for compaction. Its effect is to limit compaction of large StoreFiles. The
value of hbase.hstore.compaction.ratio is expressed as a floating-point
decimal. A large ratio, such as 10, will produce a single giant StoreFile.
Conversely, a low value, such as .25, will produce behavior similar to the
BigTable compaction algorithm, producing four StoreFiles. A moderate value
of between 1.0 and 1.4 is recommended. When tuning this value, you are
balancing write costs with read costs. Raising the value (to something like
1.4) incurs higher write costs, because you will compact larger
StoreFiles. However, during reads, HBase will need to seek through fewer
StoreFiles to accomplish the read. Consider this approach if you cannot
take advantage of Bloom filters. Otherwise, you can lower this value to
something like 1.0 to reduce the background cost of writes, and use Bloom
filters to control the number of StoreFiles touched during reads. For most
cases, the default value is appropriate.
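
For reference, here is a sketch of how the ratio could be raised in
hbase-site.xml (the 1.4 below is only an illustrative value, not a
recommendation):

  <property>
    <name>hbase.hstore.compaction.ratio</name>
    <value>1.4</value>
  </property>

A change like this normally takes effect after a region server restart; it
trades heavier background write work for fewer StoreFiles per read.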

So this is what may be going on: the files that I have available for
compaction are larger than my default hbase.hstore.compaction.min.size, so
they are not compacted during a (non-manual) minor compaction event. If I
want aggressiveness from my cluster during these events, I can place a
larger value here so that the server includes all files for all minor
compactions.
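
As a sketch of that idea (the 512 MB figure is just an assumption for
illustration), hbase.hstore.compaction.min.size could be raised so that
more of the existing files fall under the always-eligible threshold:

  <property>
    <name>hbase.hstore.compaction.min.size</name>
    <value>536870912</value>  <!-- 512 MB, in bytes -->
  </property>

Files smaller than this value skip the ratio check entirely and are always
candidates for minor compaction.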

On Sun, Aug 10, 2014 at 11:46 AM, Ted Yu <yuzhihong@gmail.com> wrote:

> What is the value for the config parameter 'hbase.hstore.compaction.ratio'?
> Thanks
> On Sun, Aug 10, 2014 at 7:17 AM, Geovanie Marquez <
> geovanie.marquez@gmail.com> wrote:
> > I notice that when I have regions with store file counts greater
> > than hbase.hstore.blockingStoreFiles, on cluster startup the count
> > drops dramatically to just under the blockingStoreFiles parameter
> > value in a relatively short amount of time, and then it stalls and
> > doesn't fall any further (as aggressively). I.e., if the value was
> > 200, it drops to 198 and just stays there.
> >
> > I'd like to make my compactions aggressive for a limited time while I
> > run a job for massive deletes. How could I accomplish this?
> >
> > Is there a setting for allocating more resources to compactions?
> > Assuming there is nothing else running on the cluster at this time.
> >
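
For the limited-time aggressive pass around the massive-delete job asked
about above, one option (a sketch; 'my_table' is a placeholder name) is to
force a major compaction from the HBase shell once the deletes finish,
since a major compaction rewrites every StoreFile in each store and drops
the delete markers:

  hbase> major_compact 'my_table'

That avoids leaving more aggressive compaction settings in place
permanently.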
