accumulo-user mailing list archives

From Eric Newton <eric.new...@gmail.com>
Subject Re: java.lang.OutOfMemoryError: GC overhead limit exceeded
Date Tue, 15 Dec 2015 15:51:02 GMT
This is actually a client issue, and not related to the server or its
performance.

The code sending updates to the server is spending so much time in Java GC
that it has decided to kill itself.

You may want to increase the size of the JVM used for ingest, probably by
using a larger value in ACCUMULO_OTHER_OPTS.
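For example, in accumulo-env.sh (the 3g figure below is purely illustrative;
size it to your ingest client's actual working set):

```shell
# accumulo-env.sh -- heap for clients launched via `accumulo` that are not
# the tserver/master/monitor/gc; 3g here is an example value, not a recommendation.
export ACCUMULO_OTHER_OPTS="${POLICY} -Xmx3g -Xms3g"
```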

"No Such SessionID" errors are typical of a paused client: update sessions
time out and are forgotten. Your client ran low on memory, paused to GC,
and the server forgot about its session.
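Independent of heap size, you can also bound how much data the client buffers
before flushing. A minimal sketch using the 1.7 client API (the table name,
buffer sizes, and helper class are illustrative, not from your setup):

```java
import java.util.concurrent.TimeUnit;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;

public class IngestSketch {
  // conn is an already-authenticated Connector; "mytable" is a placeholder.
  static BatchWriter createWriter(Connector conn, String table) throws Exception {
    BatchWriterConfig cfg = new BatchWriterConfig()
        .setMaxMemory(64 * 1024 * 1024)      // cap the client-side buffer at 64 MB
        .setMaxLatency(2, TimeUnit.MINUTES)  // flush at least this often
        .setTimeout(5, TimeUnit.MINUTES)     // give up on writes stuck longer than this
        .setMaxWriteThreads(4);              // threads sending mutations to tservers
    return conn.createBatchWriter(table, cfg);
  }
}
```

A smaller maxMemory means more frequent flushes, but it keeps the mutation
buffer from competing with the rest of your ingest code for heap.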

-Eric

On Tue, Dec 15, 2015 at 7:45 AM, mohit.kaushik <mohit.kaushik@orkash.com>
wrote:

> Dear All,
>
> I am getting the below mentioned exception on Client side while inserting
> data.
>
> Exception in thread "Thrift Connection Pool Checker"
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> ERROR - TabletServerBatchWriter.updateUnknownErrors(520) -  Failed to
> send tablet server orkash1:9997 its batch : GC overhead limit exceeded
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> ERROR - ClientCnxn$1.uncaughtException(414) -  from
> main-SendThread(orkash2:2181)
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
>
> This exception appears a few days after ingestion starts. I have already
> assigned appropriate memory to all components. I have a 3-node cluster
> with Accumulo 1.7.0 and Hadoop 2.7.0 (32 GB RAM each). The Accumulo
> master and namenodes run on different servers.
>
> * Accumulo-env.sh*
> ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx8g -Xms3g  -XX:NewSize=500m
> -XX:MaxNewSize=500m "
> ACCUMULO_MASTER_OPTS="${POLICY} -Xmx1g -Xms1g"
> ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx1g -Xms256m"
> ACCUMULO_GC_OPTS="-Xmx512m -Xms256m"
> ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:SurvivorRatio=3
> -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true"
>
> * Accumulo-site.xml*
>   <property>
>     <name>tserver.memory.maps.max</name>
>     <value>2G</value>
>   </property>
>   <property>
>     <name>tserver.memory.maps.native.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>tserver.cache.data.size</name>
>     <value>2G</value>
>   </property>
>   <property>
>     <name>tserver.cache.index.size</name>
>     <value>1G</value>
>   </property>
>   <property>
>     <name>tserver.sort.buffer.size</name>
>     <value>500M</value>
>   </property>
>   <property>
>     <name>tserver.walog.max.size</name>
>     <value>1G</value>
>   </property>
>
>
> I found that even after setting individual memory limits, the servers are
> using almost all of their memory (up to 21 GB cached). I am not running any
> other application on these servers; only Accumulo and Hadoop are deployed.
> Why are the servers caching so much data (21 GB)?
>
> When I scanned the logs, I found another exception in the Accumulo tserver
> logs:
>
> org.apache.thrift.TException: No Such SessionID
>         at
> org.apache.accumulo.server.rpc.RpcWrapper$1.invoke(RpcWrapper.java:51)
>         at com.sun.proxy.$Proxy20.applyUpdates(Unknown Source)
>         at
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2425)
>         at
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2411)
>         at
> org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>         at
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>         at
> org.apache.accumulo.server.rpc.TimedProcessor.process(TimedProcessor.java:63)
>         at
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
>         at
> org.apache.accumulo.server.rpc.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:78)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at
> org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>
> Thanks & Regards
> Mohit Kaushik
>
>
