hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: storing MB sized files in HBase
Date Tue, 15 Nov 2011 21:17:37 GMT
You *can*.

You don't have to adjust the HBase HFile block size since each object
will just take exactly one block.

You do want to adjust the HDFS block size higher.
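For example, the HDFS block size is configured in hdfs-site.xml; the property and value below are illustrative (check the defaults for your Hadoop version):

```xml
<!-- hdfs-site.xml: raise the HDFS block size so multi-MB cells and
     their HFiles span fewer blocks. 128 MB is an example value. -->
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
</property>
```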

The region size should always be managed, regardless of your application.
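If you do need to adjust it, the region split threshold is set via `hbase.hregion.max.filesize` in hbase-site.xml; the value below is only illustrative, not a recommendation:

```xml
<!-- hbase-site.xml: maximum store file size before a region splits.
     10 GB here is an example figure; tune to your workload. -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>10737418240</value>
</property>
```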

One thing to keep in mind is that fat cells aren't the typical use case,
so the tunings for them are less well known and you'll be more on your own.

Regarding your second solution, HDFS is one option but basically any
FS that can serve a lot of data live should be good. I think some
people recommend GlusterFS and it seems that MapR could do it too.
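A minimal sketch of the pointer approach from the HBase shell (the table, family, and paths here are made up for illustration):

```
hbase> create 'files', {NAME => 'f'}
hbase> put 'files', 'doc-0001', 'f:path', 'hdfs:///blobs/doc-0001.bin'
hbase> put 'files', 'doc-0001', 'f:size', '2516582'
hbase> get 'files', 'doc-0001'
```

The binary blob itself lives in HDFS (or whichever FS you pick); HBase only stores the location and any metadata you want to index.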


On Tue, Nov 8, 2011 at 2:02 PM, Sujee Maniyam <sujee@sujee.net> wrote:
> Hi All,
> I have binary data files that are 2-5 MB in size. Can I store them
> in HBase -- adjusting block-size and region-size? Or should I
> store them in HDFS and store the pointer in HBase?
> http://wiki.apache.org/hadoop/Hbase/FAQ_Design#A3
> says not to go beyond 10MB per cell.
> any tips on storing large sized content in HBase?
> thanks
> Sujee
> http://sujee.net
