hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Hbase: Bulk Loading with Compression and DBE
Date Sun, 14 Dec 2014 19:01:12 GMT
Can you give the complete stack trace?

You can try specifying hbase.mapreduce.hfileoutputformat.datablock.encoding as PREFIX_TREE.
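For example, in the driver before the job is configured (a minimal sketch; only the property key comes from the message above, the rest is illustrative):

    // imports: org.apache.hadoop.conf.Configuration,
    //          org.apache.hadoop.hbase.HBaseConfiguration,
    //          org.apache.hadoop.mapreduce.Job
    Configuration conf = HBaseConfiguration.create();
    // Override the data block encoding used when writing the bulk-load HFiles,
    // independent of what the table's column families declare.
    conf.set("hbase.mapreduce.hfileoutputformat.datablock.encoding", "PREFIX_TREE");
    Job job = Job.getInstance(conf, "hfile bulk load");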

Thanks

On Dec 14, 2014, at 10:04 PM, Shashwat Mishra <shashwat.mishra@imag.fr> wrote:

> Hi all,
> 
> I am trying to bulk load some network data into an HBase table. My mapper emits
> ImmutableBytesWritable, KeyValue pairs. I declare a pre-split table where the column
> families have compression set to SNAPPY and Data Block Encoding set to PREFIX_TREE
> (hcd.setCompressionType(Algorithm.SNAPPY); and hcd.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE);).
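> For reference, the table declaration is roughly the following (a minimal sketch; the
> table name, family name and split keys are placeholders):
> 
>     // imports: org.apache.hadoop.hbase.{HTableDescriptor, HColumnDescriptor, TableName},
>     //          org.apache.hadoop.hbase.io.compress.Compression,
>     //          org.apache.hadoop.hbase.io.encoding.DataBlockEncoding
>     HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("network_data"));
>     HColumnDescriptor hcd = new HColumnDescriptor("d");
>     hcd.setCompressionType(Compression.Algorithm.SNAPPY);      // SNAPPY compression
>     hcd.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE);   // PREFIX_TREE encoding
>     htd.addFamily(hcd);
>     admin.createTable(htd, splitKeys);                         // pre-split regions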
> 
> Subsequently I use HFileOutputFormat2.configureIncrementalLoad(job, table), which should
> generate the HFiles for me. However, I run into the following error.
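> The driver wiring around it looks roughly like this (a minimal sketch against the
> 0.96 API; the mapper, table and path names are placeholders):
> 
>     job.setMapperClass(NetworkDataMapper.class);             // emits (ImmutableBytesWritable, KeyValue)
>     job.setMapOutputKeyClass(ImmutableBytesWritable.class);
>     job.setMapOutputValueClass(KeyValue.class);
>     HTable table = new HTable(conf, "network_data");
>     // configureIncrementalLoad sets the reducer, the TotalOrderPartitioner and
>     // HFileOutputFormat2 itself, based on the table's current region boundaries.
>     HFileOutputFormat2.configureIncrementalLoad(job, table);
>     FileInputFormat.addInputPath(job, new Path(inputDir));
>     FileOutputFormat.setOutputPath(job, new Path(hfileOutputDir));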
> 
> 14/12/14 12:23:04 INFO mapreduce.Job: map 100% reduce 80%
> 14/12/14 12:23:06 INFO mapreduce.Job: Task Id : attempt_1416932815472_0065_r_000001_0, Status : FAILED
> Error: org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext.<init>(Lorg/apache/hadoop/hbase/io/compress/Compression$Algorithm;Lorg/apache/hadoop/hbase/io/encoding/DataBlockEncoding;[B)V
> 
> Am I not configuring the compression properly? I suspected that HFileOutputFormat2 might
> not support compression/encoding, but I get the same error when both are disabled.
> Any ideas what's going on?
> 
> I am using HBase version 0.96.1.1-cdh5.0.1.
> 
> Thank You,
> 
> Shashwat
