hbase-user mailing list archives

From Nick Dimiduk <ndimi...@gmail.com>
Subject Re: HBase with opentsdb creates huge .tmp file & runs out of hdfs space
Date Tue, 13 Jan 2015 18:18:29 GMT
Ouch :/

Not to be a pedant, but are you sure HDFS is configured against the 2TB of
space (and not, say, the root partition)? Are you sure the temporary
file is growing to 2TB (hadoop fs -ls output)? Are you using any
BlockEncoding or Compression with this column family? Any other store/table
configuration? This happens repeatably? Can you provide jstack of the RS
process along with log lines while this file is growing excessively?
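The checks Nick asks for can be run from the shell. A rough sketch, assuming the default HBase root directory of `/hbase` on HDFS and OpenTSDB's default `tsdb` table name (adjust paths to your deployment); these commands only inspect state and are safe to run on a live cluster:

```shell
# Confirm how much capacity HDFS actually sees (vs. the root partition)
hdfs dfsadmin -report | head -20

# Size of the tsdb table's data on HDFS
hadoop fs -du -s -h /hbase/data/default/tsdb

# Look for oversized temporary compaction output under the region dirs
# (path layout assumed from HBase defaults; adjust for your version)
hadoop fs -ls -R /hbase/data/default/tsdb | grep '\.tmp'

# Capture a thread dump of the RegionServer while the .tmp file grows
jstack "$(jps | awk '/HRegionServer/ {print $1}')" > rs-jstack.txt
```

Running the `hadoop fs -du` and `-ls` commands a few times while the compaction is in progress will show whether the `.tmp` file itself is what is consuming the 2TB.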

Thanks,
Nick

On Tue, Jan 13, 2015 at 9:47 AM, sathyafmt <sathyafmt@gmail.com> wrote:

> Thanks Esteban. We do have lots of space ~2TB. The compaction starts on
> around a 300MB column and dies after consuming all the 2TB of space.
>
> sathya
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/HBase-with-opentsdb-creates-huge-tmp-file-runs-out-of-hdfs-space-tp4067577p4067583.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
