hbase-user mailing list archives

From Thanh Do <than...@cs.wisc.edu>
Subject Re: Downside of too many HFiles
Date Wed, 12 Jun 2013 16:34:58 GMT
You may run into an OOM (out-of-memory) error when doing compaction.

On Wed, Jun 12, 2013 at 10:14 AM, Rahul Ravindran <rahulrv@yahoo.com> wrote:

> Hello,
> I am trying to understand the downsides of having a large number of HFiles
> as a result of a large hbase.hstore.compactionThreshold.
> This delays major compaction. However, the amount of data that needs to
> be read and re-written as a single HFile during major compaction will
> remain the same, unless we have a large number of deletes or expired rows.
> I understand that random reads will be affected, since each HFile may be a
> candidate for the row, but is there any other downside I am missing?
> ~Rahul.
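For reference, the threshold under discussion is the number of StoreFiles in a store that triggers a (minor) compaction, and it is set in hbase-site.xml. A minimal sketch; the value 10 is purely illustrative (the shipped default is 3), not a recommendation:

```xml
<!-- hbase-site.xml fragment (illustrative value, not a tuning recommendation) -->
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <!-- Compact once a store accumulates this many StoreFiles; default is 3.
       Raising it delays compaction, at the cost of more HFiles per read. -->
  <value>10</value>
</property>
```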
