accumulo-user mailing list archives

From "mohit.kaushik" <mohit.kaus...@orkash.com>
Subject java.lang.OutOfMemoryError: GC overhead limit exceeded
Date Tue, 15 Dec 2015 12:45:35 GMT
Dear All,

I am getting the below mentioned exception on Client side while 
inserting data.

    Exception in thread "Thrift Connection Pool Checker" java.lang.OutOfMemoryError: GC overhead limit exceeded
    ERROR - TabletServerBatchWriter.updateUnknownErrors(520) - Failed to send tablet server orkash1:9997 its batch : GC overhead limit exceeded
    java.lang.OutOfMemoryError: GC overhead limit exceeded
    ERROR - ClientCnxn$1.uncaughtException(414) - from main-SendThread(orkash2:2181)
    java.lang.OutOfMemoryError: GC overhead limit exceeded


This exception appears a few days after ingestion starts. I have already 
assigned what I believe is appropriate memory to all components. I have a 
3-node cluster with Accumulo 1.7.0 and Hadoop 2.7.0 (32 GB RAM each). The 
Accumulo masters and namenodes run on different servers.
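
For reference, the writer is set up with the standard BatchWriter API, 
along the lines of the sketch below (simplified; the instance name, 
credentials, and table name are placeholders, not my real values). My 
understanding is that buffered mutations should stay under maxMemory, 
which is why the client-side OOM surprises me:

    import java.util.concurrent.TimeUnit;

    import org.apache.accumulo.core.client.BatchWriter;
    import org.apache.accumulo.core.client.BatchWriterConfig;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.data.Value;
    import org.apache.hadoop.io.Text;

    public class IngestSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder instance name and credentials -- not my real values.
            Connector conn = new ZooKeeperInstance("myInstance", "orkash2:2181")
                    .getConnector("user", new PasswordToken("secret"));

            // The BatchWriter buffers mutations client-side up to maxMemory,
            // then sends them to tablet servers on maxWriteThreads threads.
            BatchWriterConfig cfg = new BatchWriterConfig()
                    .setMaxMemory(64 * 1024 * 1024)      // 64 MB client buffer
                    .setMaxLatency(2, TimeUnit.MINUTES)
                    .setMaxWriteThreads(4);

            BatchWriter bw = conn.createBatchWriter("myTable", cfg);
            try {
                Mutation m = new Mutation(new Text("row1"));
                m.put(new Text("cf"), new Text("cq"), new Value("v".getBytes()));
                bw.addMutation(m);
            } finally {
                bw.close(); // flushes any buffered mutations
            }
        }
    }
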
accumulo-env.sh:
ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx8g -Xms3g  -XX:NewSize=500m 
-XX:MaxNewSize=500m "
ACCUMULO_MASTER_OPTS="${POLICY} -Xmx1g -Xms1g"
ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx1g -Xms256m"
ACCUMULO_GC_OPTS="-Xmx512m -Xms256m"
ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:SurvivorRatio=3 
-XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true"

accumulo-site.xml:
   <property>
     <name>tserver.memory.maps.max</name>
     <value>2G</value>
   </property>
   <property>
     <name>tserver.memory.maps.native.enabled</name>
     <value>true</value>
   </property>
   <property>
     <name>tserver.cache.data.size</name>
     <value>2G</value>
   </property>
   <property>
     <name>tserver.cache.index.size</name>
     <value>1G</value>
   </property>
   <property>
     <name>tserver.sort.buffer.size</name>
     <value>500M</value>
   </property>
   <property>
     <name>tserver.walog.max.size</name>
     <value>1G</value>
   </property>
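
Putting the numbers above together (my own accounting, so please correct 
me if this is wrong): with tserver.memory.maps.native.enabled set to true, 
the 2G in-memory map is allocated off-heap, while the 2G data cache and 1G 
index cache come out of the 8g tserver heap.

    // Rough tserver memory accounting for the settings above. This is my
    // reading of where each allocation lives, not an authoritative breakdown.
    public class TserverMemoryBudget {
        public static void main(String[] args) {
            long gb = 1L << 30;
            long heap       = 8 * gb; // -Xmx8g
            long dataCache  = 2 * gb; // tserver.cache.data.size (on-heap)
            long indexCache = 1 * gb; // tserver.cache.index.size (on-heap)
            long nativeMap  = 2 * gb; // tserver.memory.maps.max (off-heap with native maps)

            long heapLeft = heap - dataCache - indexCache;
            // Prints: caches=3 GB on-heap, remaining heap=5 GB, native map=2 GB off-heap
            System.out.printf("caches=%d GB on-heap, remaining heap=%d GB, native map=%d GB off-heap%n",
                    (dataCache + indexCache) / gb, heapLeft / gb, nativeMap / gb);
        }
    }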


I found that even after setting these individual memory limits, the 
servers are using almost all of their memory (up to 21 GB shows as 
cached). I am not running any other applications on these servers; only 
Accumulo and Hadoop are deployed. Why are the servers caching so much 
data (21 GB)?

When I scanned the logs, I found another exception in the Accumulo tserver logs:

    org.apache.thrift.TException: No Such SessionID
            at org.apache.accumulo.server.rpc.RpcWrapper$1.invoke(RpcWrapper.java:51)
            at com.sun.proxy.$Proxy20.applyUpdates(Unknown Source)
            at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2425)
            at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2411)
            at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
            at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
            at org.apache.accumulo.server.rpc.TimedProcessor.process(TimedProcessor.java:63)
            at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
            at org.apache.accumulo.server.rpc.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:78)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
            at java.lang.Thread.run(Thread.java:745)


Thanks & Regards
Mohit Kaushik

