hadoop-common-user mailing list archives

From: Jean-Daniel Cryans <jdcry...@apache.org>
Subject: Re: Adjusting column value size.
Date: Thu, 06 Oct 2011 17:49:03 GMT
(BCC'd common-user@ since this seems strictly HBase-related)

Interesting question... And you probably need all those ints at the same
time, right? No streaming? I'll assume not.

So the second solution seems better because of the overhead of storing
each cell. Basically, storing one int per cell, you would end up storing
more key bytes than value bytes (size-wise).
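
To put rough numbers on that, here is a back-of-the-envelope sketch of
the per-cell cost in HBase's KeyValue layout. The row key, family, and
qualifier lengths are made-up examples, not anything from Ed's schema:

// Rough per-cell overhead in HBase's KeyValue format.
// The row/family/qualifier sizes below are hypothetical.
public class CellOverhead {
    public static void main(String[] args) {
        int rowLen = 16;      // assumed row key length
        int familyLen = 1;    // e.g. a one-letter family like "f"
        int qualifierLen = 8; // e.g. a column index encoded as a long
        int valueLen = 4;     // one int

        // KeyValue layout: 4-byte key length + 4-byte value length,
        // then the key: 2-byte row length + row + 1-byte family length
        // + family + qualifier + 8-byte timestamp + 1-byte key type,
        // then the value.
        int keyBytes = 2 + rowLen + 1 + familyLen + qualifierLen + 8 + 1;
        int totalBytes = 4 + 4 + keyBytes + valueLen;

        System.out.printf("key: %d bytes, value: %d bytes, total: %d%n",
                keyBytes, valueLen, totalBytes);
        // -> key: 37 bytes, value: 4 bytes, total: 49
    }
}

So with one int per cell, the vast majority of what you store is key,
not data.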

Another thing: if you pack enough ints together and there's some sort of
repetition in the data, you might be able to use LZO compression on that
table.
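
For what it's worth, here's a minimal sketch of the pack/split step from
option 2, assuming plain big-endian 4-byte ints (class and method names
are made up):

import java.nio.ByteBuffer;
import java.util.Arrays;

public class IntPacking {
    // Concatenate a batch of ints into a single cell value.
    static byte[] pack(int[] ints) {
        ByteBuffer buf = ByteBuffer.allocate(ints.length * 4);
        for (int i : ints) {
            buf.putInt(i); // big-endian by default
        }
        return buf.array();
    }

    // Split a cell value back into ints, 4 bytes at a time.
    static int[] unpack(byte[] value) {
        ByteBuffer buf = ByteBuffer.wrap(value);
        int[] out = new int[value.length / 4];
        for (int i = 0; i < out.length; i++) {
            out[i] = buf.getInt();
        }
        return out;
    }

    public static void main(String[] args) {
        int[] batch = {1, 2, 3, 42};
        System.out.println(Arrays.toString(unpack(pack(batch))));
        // -> [1, 2, 3, 42]
    }
}

Fixed-width values like this also keep the split logic trivial.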

I'd love to hear about your experiments once you've done them.

J-D

On Mon, Oct 3, 2011 at 10:58 PM, edward choi <mp2893@gmail.com> wrote:

> Hi,
>
> I have a question regarding performance and column value size.
> I need to store several million integers per row. ("Several million" is
> the important part here.)
> I was wondering which method would be more beneficial performance-wise.
>
> 1) Store each integer in its own column, so that when a row is read,
> several million columns come back with it. The user would then map each
> column value into some kind of container (e.g. a Vector or ArrayList).
> 2) Store, for example, a thousand integers in a single column (by
> concatenating them), so that when a row is read, only a few thousand
> columns come back. The user would then have to split each column value
> into 4-byte chunks and map the resulting integers into some kind of
> container (e.g. a Vector or ArrayList).
>
> I am curious which approach would be better. 1) reads several million
> columns but needs no additional processing. 2) reads only a few thousand
> columns but needs the extra splitting step.
> Any advice would be appreciated.
>
> Ed
>
