accumulo-user mailing list archives

From "mohit.kaushik" <>
Subject Re: Tablet servers died (exceeded maximum hold time)
Date Tue, 05 Jan 2016 12:06:38 GMT

Apologies for the delayed reply, but do I really need to expand my 
cluster? The ingest rate is not that high (around 100 - 500 docs/sec), 
and I would hope Accumulo can handle it easily with 3 tservers.

The swap memory problem no longer exists; the servers are not using any 
swap now. And after diagnosing the clients, the mutations-rejected 
exception has not appeared since. I reduced the number of threads 
ingesting data on the clients and tuned some JVM parameters. Thanks.
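
For reference, the client-side throttling boils down to something like 
this (a minimal sketch with a stand-in ingest task, not our actual 
client code; pool size and queue depth are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledIngest {

    // Run `batches` stand-in ingest tasks through a small, fixed pool.
    // The bounded queue plus CallerRunsPolicy pushes back on the producer
    // instead of letting submissions pile up faster than the servers
    // can absorb them.
    static int runThrottled(int poolSize, int batches) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());
        AtomicInteger written = new AtomicInteger();
        for (int i = 0; i < batches; i++) {
            pool.submit(written::incrementAndGet); // stand-in for writing one batch of mutations
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return written.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("batches written: " + runThrottled(4, 1000));
    }
}
```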

But I ran into a new problem:

org.apache.accumulo.tserver.HoldTimeoutException: Commits are held

Due to this, two of my tservers died (killed by the master). This is serious:

2016-01-01 00:16:34,829 [master.Master] WARN : Tablet server orkash1:9997[1517fe14295029c]
exceeded maximum hold time: attempting to kill it
2016-01-01 00:16:34,829 [master.LiveTServerSet] INFO : Removing zookeeper lock for orkash1:9997[1517fe14295029c]
2015-12-31 23:27:39,973 [master.Master] WARN : Tablet server orkash3:9997[351edd882900000]
exceeded maximum hold time: attempting to kill it
2015-12-31 23:27:39,992 [master.LiveTServerSet] INFO : Removing zookeeper lock for orkash3:9997[351edd882900000]
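
If I understand right, how long a tserver may hold commits before the 
master kills it is governed by tserver.hold.time.max. These are the 
settings I am looking at in accumulo-site.xml (values illustrative, not 
a recommendation):

```xml
<!-- accumulo-site.xml: values illustrative, not a recommendation -->
<property>
  <name>tserver.hold.time.max</name>
  <!-- how long commits may be held before the master kills the tserver -->
  <value>5m</value>
</property>
<property>
  <name>tserver.memory.maps.max</name>
  <!-- in-memory map size; holds start when writes fill this faster
       than minor compactions can drain it -->
  <value>1G</value>
</property>
```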

Further, I found other exceptions in the monitor logs related to 
writers, flushes, and more:

(1) Problem flushing traces, resetting writer. Set log level to DEBUG to see stacktrace cause:
org.apache.accumulo.core.client.MutationsRejectedException: # constraint violations : 0  security
codes: {}  # server errors 1 # exceptions 0
(2) Server side error on orkash2:9997: org.apache.thrift.TApplicationException: Internal error
processing closeUpdate
(3) Unable to write mutation to table; discarding span.Set log level to DEBUG to see stacktrace
cause: org.apache.accumulo.core.client.MutationsRejectedException: # constraint violations
: 0  security codes: {}  # server errors 1 # exceptions 1
(4) Problem closing batch writer. Set log level to DEBUG to see stacktrace. cause: org.apache.accumulo.core.client.MutationsRejectedException:
# constraint violations : 0  security codes: {}  # server errors 1 # exceptions 1
(5) Commits are held
	org.apache.accumulo.tserver.HoldTimeoutException: Commits are held
		at org.apache.accumulo.tserver.TabletServerResourceManager.waitUntilCommitsAreEnabled(
		at org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.flush(
		at org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.closeUpdate(
		at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
		at sun.reflect.DelegatingMethodAccessorImpl.invoke(
		at java.lang.reflect.Method.invoke(
		at org.apache.accumulo.core.trace.wrappers.RpcServerInvocationHandler.invoke(
		at org.apache.accumulo.server.rpc.RpcWrapper$1.invoke(
		at com.sun.proxy.$Proxy19.closeUpdate(Unknown Source)
		at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(
		at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(
		at org.apache.thrift.ProcessFunction.process(
		at org.apache.thrift.TBaseProcessor.process(
		at org.apache.accumulo.server.rpc.TimedProcessor.process(
		at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(
		at org.apache.accumulo.server.rpc.CustomNonBlockingServer$
		at java.util.concurrent.ThreadPoolExecutor.runWorker(
		at java.util.concurrent.ThreadPoolExecutor$
(6) Assignment for 4<;5684390f has been running for at least 822922ms
	java.lang.Exception: Assignment of 4<;5684390f
		at java.lang.Object.wait(Native Method)
		at java.lang.Object.wait(
		at org.apache.hadoop.hdfs.DFSOutputStream.waitAndQueueCurrentPacket(
		at org.apache.hadoop.hdfs.DFSOutputStream.writeChunkImpl(
		at org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(
		at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(
		at org.apache.hadoop.fs.FSOutputSummer.write1(
		at org.apache.hadoop.fs.FSOutputSummer.write(
		at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(
		at org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flushBuffer(
		at org.apache.accumulo.core.file.rfile.bcfile.SimpleBufferedOutputStream.flush(
		at org.apache.accumulo.core.file.rfile.bcfile.Compression$FinishOnFlushCompressionStream.flush(
		at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$WBlockState.finish(
		at org.apache.accumulo.core.file.rfile.bcfile.BCFile$Writer$BlockAppender.close(
		at org.apache.accumulo.core.file.blockfile.impl.CachableBlockFile$BlockWrite.close(
		at org.apache.accumulo.core.file.rfile.RFile$Writer.closeBlock(
		at org.apache.accumulo.core.file.rfile.RFile$Writer.append(
		at org.apache.accumulo.tserver.tablet.Compactor.compactLocalityGroup(
		at org.apache.accumulo.tserver.tablet.Tablet.minorCompact(
		at org.apache.accumulo.tserver.tablet.Tablet.minorCompactNow(
		at org.apache.accumulo.tserver.TabletServer$
		at java.util.concurrent.ThreadPoolExecutor.runWorker(
		at java.util.concurrent.ThreadPoolExecutor$
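
Side note for anyone else reading: most of those messages say "Set log 
level to DEBUG to see stacktrace cause". Assuming the stock log4j 
properties setup is in use, raising the Accumulo logger in the server's 
log4j configuration should surface them (snippet illustrative):

```properties
# raise Accumulo's loggers to DEBUG so monitor/tserver logs include stack traces
log4j.logger.org.apache.accumulo=DEBUG
```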

*I expected that all the tablets hosted by these tservers would be 
moved to the only remaining tserver, and that the mutations going to 
those tablets would follow, but this wasn't the case:* more than 100 
tablets were left unassigned, and all the mutations going to those 
servers were rejected. Why did the tablets not move to the other 
tserver? I need to know, but there is no info in the logs. And why did 
the hold time exceed the maximum limit? Could it be a network issue? 
Please provide your inputs and help me handle this.
org.apache.accumulo.core.client.MutationsRejectedException: # constraint violations : 0  security codes: {}  # server errors 0 # exceptions 17
	at com.orkash.db.DBQuery.insertDB(
Caused by: org.apache.accumulo.core.client.TimedOutException: Failed to obtain metadata
	at java.util.TimerThread.mainLoop(
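
On the client I am also considering a bounded retry with backoff around 
the writer, instead of resending immediately and looping. Roughly like 
this (sketch only; RejectedException and Batch are stand-ins, not 
Accumulo's MutationsRejectedException or its API):

```java
public class RetryingWriter {

    // Stand-in for MutationsRejectedException; not the Accumulo class.
    static class RejectedException extends Exception {}

    interface Batch { void write() throws RejectedException; }

    // Retry a rejected batch a bounded number of times with exponential
    // backoff, so a struggling server gets breathing room and we do not
    // loop on rejections forever.
    static boolean writeWithRetry(Batch batch, int maxRetries) throws InterruptedException {
        long backoffMs = 100;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                batch.write();
                return true;
            } catch (RejectedException e) {
                Thread.sleep(backoffMs);
                backoffMs = Math.min(backoffMs * 2, 10_000); // cap the backoff
            }
        }
        return false; // give up after maxRetries; surface to the caller
    }

    public static void main(String[] args) throws InterruptedException {
        int[] failures = {2}; // fail twice, then succeed
        boolean ok = writeWithRetry(() -> {
            if (failures[0]-- > 0) throw new RejectedException();
        }, 5);
        System.out.println("write succeeded: " + ok);
    }
}
```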

I can post more details if you want, and start a new thread for this if 
needed.
-Mohit Kaushik

On 12/26/2015 06:24 PM, Eric Newton wrote:
> Generally speaking, rejected mutations due to resource contention are 
> considered a system failure, requiring a re-examination of system 
> resources.
> That requires re-architecting your ingest or adding significant resources.
> You could do some substantial pre-processing of your ingest and 
> bulk-load the result. It will increase latency of the incoming 
> information, but it will reduce the pressure on accumulo.
> Or, as I suggested, you could increase your processing/storage by an 
> order of magnitude. That is why the software is built to handle 
> hundreds (or more) nodes.
> 3-5G of swap out of 32G is not a lot. But why is it using any at all?  
> Pulling 3G from disk is not going to be very fast. If you must, reduce 
> the size of your tserver. Focus on keeping your system at zero swap.
> I suggest, again, that you consider expanding your system to many more 
> nodes.  Accumulo is not written in hand-tuned assembler. It was 
> written with the knowledge that more hardware is pretty cheap, and 
> scaling up is better than small inefficiencies.
> On Thu, Dec 24, 2015 at 5:49 AM, mohit.kaushik 
> <> wrote:
>     @ Eric: yes, I have noticed 3GB to 5GB of swap use out of 32GB on
>     the servers. And if I explicitly resend the rejected mutations,
>     this may create a loop of mutations getting rejected again and
>     again. Then how can I handle it? How did you? Am I getting it
>     right?
>     @ Josh: For one of the zookeeper hosts, I was sharing the same
>     drive between the zookeeper data and the hadoop datanode. I have
>     changed it to the same drive layout the others have. I hope this
>     will resolve the zookeeper issue; let's see.
>     BTW, here is my zoo.cfg
>     clientPort=2181
>     dataDir=/usr/local/zookeeper/data/
>     syncLimit=5
>     tickTime=2000
>     initLimit=10
>     maxClientCnxn=100
>     server.1=orkash1:2888:3888
>     server.2=orkash2:2888:3888
>     server.3=orkash3:2888:3888
>     Thanks a lot
>     Mohit Kaushik
>     On 12/24/2015 12:47 AM, Josh Elser wrote:
>>     Eric Newton wrote:
>>>     Failure to talk to zookeeper is *really* unexpected.
>>>     Have you noticed your nodes using any significant swap?
>>     Emphasis on this. Failing to connect to ZooKeeper for 60s (2*30)
>>     is a very long time (although, I think I have seen JVM GC pauses
>>     longer before).
>>     A couple of generic ZooKeeper questions:
>>     1. Can you share your zoo.cfg?
>>     2. Make sure that ZooKeeper has a "dedicated" drive for its
>>     dataDir. HDFS DataNodes using the same drive as ZooKeeper for its
>>     transaction log can cause ZooKeeper to be starved for I/O
>>     throughput. A normal "spinning" disk is also better for ZK over
>>     SSDs (last I read).
>>     3. Check OS/host level metrics on these ZooKeeper hosts during
>>     the times you see these failures.
>>     4. Consider moving your ZooKeeper hosts to "less busy" nodes if
>>     you can. You can consider adding more ZooKeeper hosts to the
>>     quorum, but keep in mind that this will increase the minimum
>>     latency for ZooKeeper operations (as a majority of nodes,
>>     n/2 + 1, must acknowledge updates).
