accumulo-user mailing list archives

From Eric Newton <eric.new...@gmail.com>
Subject Re: MutationsRejectedException and TApplicationException
Date Wed, 18 Jun 2014 16:39:12 GMT
This error is often a result of overwhelming your server resources.

It basically says "an update came in that was so old, the id used to
identify the sender has already aged off."

What is your expected ingest rate during the job? What sort of resources
does Accumulo have?
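
If the job is pushing mutations faster than the tablet servers can absorb
them, it can help to rein in the batch writers that AccumuloOutputFormat
creates. A rough sketch in Scala to match your snippet below ("job" is your
Hadoop Job; the numbers are placeholders to tune for your cluster):

    import java.util.concurrent.TimeUnit
    import org.apache.accumulo.core.client.BatchWriterConfig
    import org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat

    // Throttle the per-task writers so bursts don't overwhelm the tservers.
    val bwConfig = new BatchWriterConfig()
      .setMaxMemory(50 * 1024 * 1024L)      // buffer ~50 MB before flushing
      .setMaxLatency(2, TimeUnit.MINUTES)   // flush at least every two minutes
      .setMaxWriteThreads(4)                // fewer concurrent sends per task
      .setTimeout(5, TimeUnit.MINUTES)      // give up on an unresponsive server

    AccumuloOutputFormat.setBatchWriterOptions(job, bwConfig)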


On Wed, Jun 18, 2014 at 7:09 AM, Jianshi Huang <jianshi.huang@gmail.com>
wrote:

> Here's the error message I got from the tserver_xxx.log
>
> 2014-06-18 01:06:06,816 [tserver.TabletServer] INFO : Adding 1 logs for extent g;cust:2072821;cust:20700111 as alias 37
> 2014-06-18 01:06:16,286 [thrift.ProcessFunction] ERROR: Internal error processing applyUpdates
> java.lang.RuntimeException: No Such SessionID
>         at org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.applyUpdates(TabletServer.java:1522)
>         at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.accumulo.trace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:63)
>         at com.sun.proxy.$Proxy23.applyUpdates(Unknown Source)
>         at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2347)
>         at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2333)
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>         at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:171)
>         at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
>         at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:231)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>         at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>         at java.lang.Thread.run(Thread.java:724)
> 2014-06-18 01:06:16,287 [thrift.ProcessFunction] ERROR: Internal error processing applyUpdates
> java.lang.RuntimeException: No Such SessionID
>         at org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.applyUpdates(TabletServer.java:1522)
>         at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.accumulo.trace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:63)
>         at com.sun.proxy.$Proxy23.applyUpdates(Unknown Source)
>         at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2347)
>         at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$applyUpdates.getResult(TabletClientService.java:2333)
>         at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>         at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>         at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:171)
>         at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
>         at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:231)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>         at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>         at java.lang.Thread.run(Thread.java:724)
> 2014-06-18 01:06:16,287 [util.TServerUtils$THsHaServer] WARN : Got an IOException during write!
> java.io.IOException: Connection reset by peer
>         at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>         at sun.nio.ch.IOUtil.write(IOUtil.java:65)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
>         at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:164)
>         at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:381)
>         at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:220)
>         at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:201)
>
>
> Jianshi
>
>
> On Wed, Jun 18, 2014 at 2:54 PM, Jianshi Huang <jianshi.huang@gmail.com>
> wrote:
>
>> I see. I'll check the tablet server log and paste the error message in a
>> follow-up.
>>
>> BTW, it looks like AccumuloOutputFormat is the cause; I'm currently
>> using BatchWriter directly and it works well.
>>
>> My code looks like this (it's in Scala as I'm using Spark):
>>
>>     AccumuloOutputFormat.setZooKeeperInstance(job,
>>       Conf.getString("accumulo.instance"),
>>       Conf.getString("accumulo.zookeeper.servers"))
>>     AccumuloOutputFormat.setConnectorInfo(job,
>>       Conf.getString("accumulo.user"),
>>       new PasswordToken(Conf.getString("accumulo.password")))
>>     AccumuloOutputFormat.setDefaultTableName(job,
>>       Conf.getString("accumulo.table"))
>>
>>     val paymentRDD: RDD[(Text, Mutation)] = payment.flatMap { payment =>
>> //      val key = new Text(Conf.getString("accumulo.table"))
>>       paymentMutations(payment).map((null, _))
>>     }
>>
>>     paymentRDD.saveAsNewAPIHadoopFile(
>>       Conf.getString("accumulo.instance"),
>>       classOf[Void], classOf[Mutation], classOf[AccumuloOutputFormat],
>>       job.getConfiguration)
>>
>>
>> It's also possible that saveAsNewAPIHadoopFile doesn't work well with
>> AccumuloOutputFormat.
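>>
>> For reference, the BatchWriter path I'm using now looks roughly like this
>> (a simplified sketch; it reuses the same config values, and
>> paymentMutations is my existing mutation builder):
>>
>>     import org.apache.accumulo.core.client.{BatchWriterConfig, ZooKeeperInstance}
>>     import org.apache.accumulo.core.client.security.tokens.PasswordToken
>>
>>     payment.foreachPartition { payments =>
>>       val instance = new ZooKeeperInstance(
>>         Conf.getString("accumulo.instance"),
>>         Conf.getString("accumulo.zookeeper.servers"))
>>       val connector = instance.getConnector(
>>         Conf.getString("accumulo.user"),
>>         new PasswordToken(Conf.getString("accumulo.password")))
>>       // One BatchWriter per partition; closing it flushes buffered mutations.
>>       val writer = connector.createBatchWriter(
>>         Conf.getString("accumulo.table"), new BatchWriterConfig())
>>       try {
>>         payments.foreach { p =>
>>           paymentMutations(p).foreach(m => writer.addMutation(m))
>>         }
>>       } finally {
>>         writer.close()
>>       }
>>     }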
>>
>>
>>
>> Jianshi
>>
>>
>>
>>
>> On Wed, Jun 18, 2014 at 12:45 PM, Josh Elser <josh.elser@gmail.com>
>> wrote:
>>
>>> Check the TabletServer logs. This exception is telling you that there
>>> was an error on the server; you should look there for the real problem.
>>> You can do this in one of two ways.
>>>
>>> 1) Use the "Recent Logs" page on the Accumulo monitor
>>> (http://accumulo_monitor_host:50095). Unless you cleared the logs or
>>> restarted the monitor process since you got this error, you should be able
>>> to see a nice HTML view of any errors.
>>>
>>> 2) Check the debug log, e.g. $ACCUMULO_HOME/logs/tserver_$host.debug.log.
>>> If you're running tservers on more than one node, be sure that you check
>>> the log files on all nodes.
>>>
>>> - Josh
>>>
>>>
>>> On 6/17/14, 9:33 PM, Jianshi Huang wrote:
>>>
>>>> Hi,
>>>>
>>>> I got the following errors during MapReduce ingestion. Are they serious
>>>> errors?
>>>>
>>>> java.io.IOException: org.apache.accumulo.core.client.MutationsRejectedException: # constraint violations : 0  security codes: {}  # server errors 1 # exceptions 0
>>>>         at org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat$AccumuloRecordWriter.write(AccumuloOutputFormat.java:437)
>>>>         at org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat$AccumuloRecordWriter.write(AccumuloOutputFormat.java:373)
>>>>         at org.apache.spark.rdd.PairRDDFunctions.org$apache$spark$rdd$PairRDDFunctions$$writeShard$1(PairRDDFunctions.scala:716)
>>>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>>>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>>>>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>>>>         at org.apache.spark.scheduler.Task.run(Task.scala:51)
>>>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>         at java.lang.Thread.run(Thread.java:724)
>>>>
>>>>
>>>> And
>>>>
>>>> java.io.IOException: org.apache.accumulo.core.client.AccumuloException: org.apache.thrift.TApplicationException: Internal error processing applyUpdates
>>>>         at org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat.getRecordWriter(AccumuloOutputFormat.java:558)
>>>>         at org.apache.spark.rdd.PairRDDFunctions.org$apache$spark$rdd$PairRDDFunctions$$writeShard$1(PairRDDFunctions.scala:712)
>>>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>>>>         at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>>>>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>>>>         at org.apache.spark.scheduler.Task.run(Task.scala:51)
>>>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>         at java.lang.Thread.run(Thread.java:724)
>>>>
>>>>
>>>> Cheers,
>>>> --
>>>> Jianshi Huang
>>>>
>>>> LinkedIn: jianshi
>>>> Twitter: @jshuang
>>>> Github & Blog: http://huangjs.github.com/
>>>>
>>>
>>
>>
>> --
>> Jianshi Huang
>>
>> LinkedIn: jianshi
>> Twitter: @jshuang
>> Github & Blog: http://huangjs.github.com/
>>
>
>
>
> --
> Jianshi Huang
>
> LinkedIn: jianshi
> Twitter: @jshuang
> Github & Blog: http://huangjs.github.com/
>
