accumulo-user mailing list archives

From Eric Newton <eric.new...@gmail.com>
Subject Re: Large Data Size in Row or Value?
Date Mon, 01 Apr 2013 18:22:47 GMT
"What is the largest size that seems to work?"

Tablet servers have been run in 64G JVMs without a problem, so long as
there isn't any other pressure to swap that memory out (such as large
map/reduce jobs).  Since we've been keeping the New Generation size
down ("-XX:NewSize=500m -XX:MaxNewSize=500m") we haven't seen any
problems with long pauses in the GC.
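
For reference, those flags go in the tablet server's JVM options. A
sketch, assuming the stock accumulo-env.sh conventions (the variable
name ACCUMULO_TSERVER_OPTS is the usual one; adjust for your version
and heap size):

```shell
# accumulo-env.sh (sketch; variable name per the stock script)
# Fix the heap size and pin the New Generation at 500m to keep GC pauses short.
export ACCUMULO_TSERVER_OPTS="-Xmx3g -Xms3g -XX:NewSize=500m -XX:MaxNewSize=500m"
```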

We may have run them at larger sizes, but not for very long.  The example
configurations are there for setting up a single node in your personal
development space, so the emphasis was on smaller memory footprints.
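
On the last question in the quoted mail (breaking the data into
smaller pieces): the usual pattern is to store each chunk as its own
entry, encoding the chunk index in the column qualifier so a scan
returns the pieces in order for reassembly.  A minimal sketch of just
the splitting step, in plain Java with no Accumulo client calls (the
Chunker name and the chunk sizes are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a large value into fixed-size chunks. Each chunk's position in
    // the returned list is its index; on write, that index would go into
    // the column qualifier so the pieces sort back into order on read.
    static List<byte[]> split(byte[] value, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < value.length; off += chunkSize) {
            int len = Math.min(chunkSize, value.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(value, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 1000 bytes in 300-byte chunks -> 300 + 300 + 300 + 100
        List<byte[]> parts = split(new byte[1000], 300);
        System.out.println(parts.size());          // prints 4
        System.out.println(parts.get(3).length);   // prints 100
    }
}
```

Each chunk would then be written as its own Mutation; with 128M
chunks, a 400M value becomes four entries instead of one oversized
key/value pair.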

-Eric


On Mon, Apr 1, 2013 at 10:33 AM, David Medinets <david.medinets@gmail.com>wrote:

> I have a chunk of data (let's say 400M) that I want to store in Accumulo.
> I can store the chunk in the ColumnFamily or in the Value. Does it make any
> difference to Accumulo which is used?
>
> My tserver is set up to use -Xmx3g. What is the largest size that seems to
> work? I have much more memory that I can allocate.
>
> Or should I focus on breaking the data into smaller pieces ... say 128M
> each?
>
> Thanks.
>
>
