accumulo-user mailing list archives

From Josh Elser <josh.el...@gmail.com>
Subject Re: MutationsRejectedException and TApplicationException
Date Wed, 18 Jun 2014 04:45:59 GMT
Check the TabletServer logs. This exception is telling you that there 
was an error on the server side, so the tserver logs are where you'll 
find the real problem. You can do that in one of two ways; I've also put 
a rough client-side sketch after the two options showing where the 
counters in that message come from.

1) Use the "Recent Logs" page on the Accumulo monitor 
(http://accumulo_monitor_host:50095). Unless you have cleared the logs or 
restarted the monitor process since you got this error, you should be 
able to see a nice HTML view of any errors.

2) Check the debug log, e.g. 
$ACCUMULO_HOME/logs/tserver_$host.debug.log. If you're running tservers 
on more than one node, be sure to check the log files on every node.
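
Not from your job, just an illustration: the counters in that exception 
message come straight from MutationsRejectedException, so you can also 
dump them on the client side. A minimal sketch with a plain BatchWriter 
follows -- the instance name, ZooKeepers, table, and credentials are 
placeholders, not anything from your setup:

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;

public class InspectRejection {
  public static void main(String[] args) throws Exception {
    // Placeholders: point these at your own instance/table/credentials.
    Connector conn = new ZooKeeperInstance("myInstance", "zkhost:2181")
        .getConnector("user", new PasswordToken("pass"));
    BatchWriter bw = conn.createBatchWriter("mytable", new BatchWriterConfig());
    try {
      Mutation m = new Mutation("row");
      m.put("cf", "cq", new Value("value".getBytes()));
      bw.addMutation(m);
      bw.close(); // failures from the async writes surface here
    } catch (MutationsRejectedException e) {
      // Same counters the message summarizes, plus which tservers failed,
      // i.e. which tserver_$host.debug.log to go read.
      System.err.println("constraint violations: " + e.getConstraintViolationSummaries());
      System.err.println("servers with errors:   " + e.getErrorServers());
      System.err.println("unknown exceptions:    " + e.getUnknownExceptions());
    }
  }
}

getErrorServers() is the handy one here: it names the tserver(s) whose 
debug log you want to read.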

- Josh

On 6/17/14, 9:33 PM, Jianshi Huang wrote:
> Hi,
>
> I got the following errors during MapReduce ingestion. Are they serious
> errors?
>
> java.io.IOException:
> org.apache.accumulo.core.client.MutationsRejectedException: # constraint
> violations : 0  security codes: {}  # server errors 1 # exceptions 0
>          at
> org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat$AccumuloRecordWriter.write(AccumuloOutputFormat.java:437)
>          at
> org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat$AccumuloRecordWriter.write(AccumuloOutputFormat.java:373)
>          at
> org.apache.spark.rdd.PairRDDFunctions.org$apache$spark$rdd$PairRDDFunctions$$writeShard$1(PairRDDFunctions.scala:716)
>          at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>          at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>          at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>          at org.apache.spark.scheduler.Task.run(Task.scala:51)
>          at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>          at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:724)
>
>
> And
>
> java.io.IOException: org.apache.accumulo.core.client.AccumuloException:
> org.apache.thrift.TApplicationException: Internal error processing applyUpdates
>          at
> org.apache.accumulo.core.client.mapreduce.AccumuloOutputFormat.getRecordWriter(AccumuloOutputFormat.java:558)
>          at
> org.apache.spark.rdd.PairRDDFunctions.org$apache$spark$rdd$PairRDDFunctions$$writeShard$1(PairRDDFunctions.scala:712)
>          at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>          at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:730)
>          at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>          at org.apache.spark.scheduler.Task.run(Task.scala:51)
>          at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>          at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:724)
>
>
> Cheers,
> --
> Jianshi Huang
>
> LinkedIn: jianshi
> Twitter: @jshuang
> Github & Blog: http://huangjs.github.com/
