accumulo-user mailing list archives

From "Terry P." <>
Subject Re: OutOfMemoryError: Java heap space after data load
Date Mon, 29 Apr 2013 21:07:34 GMT
Hi John,
I attempted to start the shell with --disable-tab-completion, but it still
failed in an identical manner.  What does that option do?

The ACCUMULO_OTHER_OPTS var was set to "-Xmx256m -Xms64m" via the 2GB
example config script.  I upped the -Xmx256m to 512m and the shell started
successfully, so thanks!

What would cause the shell to need more than 256m of memory just to start?
I'd like to understand how to determine an appropriate value to set.
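For anyone following along, the change described above amounts to a one-line edit in the env script. A sketch, assuming a default 1.4.x layout where the file lives at $ACCUMULO_HOME/conf/accumulo-env.sh:

```shell
# In $ACCUMULO_HOME/conf/accumulo-env.sh (copied from the 2GB example config).
# The 2GB example ships with:
#   export ACCUMULO_OTHER_OPTS="-Xmx256m -Xms64m"
# Raising the heap ceiling to 512m is what let the shell start:
export ACCUMULO_OTHER_OPTS="-Xmx512m -Xms64m"
```

The shell (and other "other" client-side processes) picks this variable up the next time it is launched; no server restart is needed for the shell itself.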


On Mon, Apr 29, 2013 at 2:21 PM, John Vines <> wrote:

> The shell gets its memory config from ACCUMULO_OTHER_OPTS in the
> accumulo-env file. If, for some reason, that value was low, or there was a
> lot of data being loaded for the tab completion stuff in the shell, it
> could die. You can try upping that value in the file, or try running the
> shell with "--disable-tab-completion" to see if that helps.
> On Mon, Apr 29, 2013 at 3:02 PM, Terry P. <> wrote:
>> Greetings folks,
>> I have stood up our 8-node Accumulo 1.4.2 cluster consisting of 3
>> ZooKeepers, 1 NameNode (also runs Accumulo Master, Monitor, and GC), and 3
>> DataNodes / TabletServers (Secondary NameNode with Alternate Accumulo
>> Master process will follow).  The initial config files were copied from the
>> 2GB/native-standalone directory.
>> For a quick test I have a text file I generated to load 500,000 rows of
>> sample data using the Accumulo shell.  For lack of a better place to run it
>> this first time, I ran it on the NameNode.  The script performs flushes
>> every 10,000 records (about 30,000 entries).  After the load finished, when
>> I attempt to login to the Accumulo Shell on the NameNode, I get the error:
>> [root@edib-namenode ~]# /usr/lib/accumulo/bin/accumulo shell -u $AUSER
>> #
>> # java.lang.OutOfMemoryError: Java heap space
>> # -XX:OnOutOfMemoryError="kill -9 %p"
>> #   Executing /bin/sh -c "kill -9 24899"...
>> Killed
>> The performance of that test was pretty poor at about 160/second
>> (somewhat expected, as it was just one thread) so to keep moving I
>> generated 3 different load files and ran one on each of the 3 DataNodes /
>> TabletServers.  Performance was much better, sustaining 1,400 per second.
>> Again, the test data load files have flush commands every 10,000 records
>> (30,000 entries), including at the end of the file.
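(The load files described above could be generated with something like the following sketch. This is not the original script: the row and column names are made up for illustration, and the flush syntax should be checked against your shell version.)

```shell
# Sketch of generating a 500,000-row Accumulo shell load file with a
# flush command after every 10,000 records and one at the end of the file.
# Row keys and column names here are purely illustrative.
awk 'BEGIN {
  for (i = 1; i <= 500000; i++) {
    printf "insert row%07d fam qual value%d\n", i, i
    if (i % 10000 == 0) print "flush"   # flush every 10,000 records
  }
  print "flush"                          # final flush at end of file
}' > load.txt
```

The resulting file can then be fed to the shell with its -f/--execute-file option, e.g. `accumulo shell -u $AUSER -f load.txt`.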
>> However, as with the NameNode, now I cannot login to the Accumulo shell
>> on any of the DataNodes either, as I get the same OutOfMemoryError.
>> My /etc/security/limits.conf file is set with 64000 for nofile and 32000
>> for nproc for the hdfs user (which also runs Accumulo; I haven't split
>> Accumulo out to its own user yet).
>> I don't see any errors in the tserver or logger logs (standard and debug)
>> or any info related to the shell failing to load.  I'm at a loss with
>> respect to where to look.  The servers have 16GB of memory, and each has
>> about 14GB currently free.
>> Any help would be greatly appreciated.
>> Best regards,
>> Terry
