hadoop-common-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10456) Bug in Configuration.java exposed by Spark (ConcurrentModificationException)
Date Fri, 04 Apr 2014 11:21:23 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13959878#comment-13959878 ]

Hudson commented on HADOOP-10456:
---------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #529 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/529/])
HADOOP-10456. Bug in Configuration.java exposed by Spark (ConcurrentModificationException).
Contributed by Nishkam Ravi. (cnauroth: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584575)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
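The committed diff itself is not reproduced in this message, only the touched files above. As a hedged illustration only, and not the actual HADOOP-10456 patch, the sketch below shows one generic way a copy constructor can be guarded against concurrent mutation of its source: mutators lock the instance, and the copy locks the source while it iterates.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration only; not the committed Configuration.java change.
public class SharedSettings {
  private final Set<String> finalParams = new HashSet<String>();

  public SharedSettings() {
  }

  // Mutators lock "this", so writers and copiers contend on the same monitor.
  public synchronized void markFinal(String name) {
    finalParams.add(name);
  }

  // Copying under the source's lock keeps the HashSet iteration from racing
  // with markFinal() in another thread (the kind of race behind the CME below).
  public SharedSettings(SharedSettings other) {
    synchronized (other) {
      this.finalParams.addAll(other.finalParams);
    }
  }
}
{code}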


> Bug in Configuration.java exposed by Spark (ConcurrentModificationException)
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-10456
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10456
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.3.0
>            Reporter: Nishkam Ravi
>            Assignee: Nishkam Ravi
>             Fix For: 3.0.0, 2.4.1
>
>         Attachments: HADOOP-10456_nravi.patch
>
>
> The following exception occurs non-deterministically:
> java.util.ConcurrentModificationException
>         at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
>         at java.util.HashMap$KeyIterator.next(HashMap.java:960)
>         at java.util.AbstractCollection.addAll(AbstractCollection.java:341)
>         at java.util.HashSet.<init>(HashSet.java:117)
>         at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:671)
>         at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:439)
>         at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:110)
>         at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
>         at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>         at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>         at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>         at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
>         at org.apache.spark.scheduler.Task.run(Task.scala:53)
>         at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
>         at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
>         at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>         at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
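
For context, a minimal, hypothetical sketch of the kind of race the trace points at: one thread mutates a shared Configuration while others derive JobConf copies from it, following the same call path as the trace (new JobConf -> Configuration copy constructor, which iterates the source's HashMap/HashSet-backed state). This is not the reporter's test case; the property names are illustrative, and whether the exception actually fires depends on timing and the Hadoop version in use.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical reproduction sketch; not the reporter's test case.
public class ConfigurationCopyRace {
  public static void main(String[] args) throws InterruptedException {
    final Configuration shared = new Configuration();

    // Writer thread mutates the shared Configuration's internal collections.
    Thread writer = new Thread(new Runnable() {
      public void run() {
        for (int i = 0; i < 100000; i++) {
          shared.set("example.key." + (i % 16), "v" + i);
        }
      }
    }, "writer");

    // Copier threads mirror Spark's HadoopRDD.getJobConf(): each constructs a
    // JobConf from the shared Configuration, which copies its collections.
    Runnable copier = new Runnable() {
      public void run() {
        for (int i = 0; i < 100000; i++) {
          JobConf copy = new JobConf(shared);  // could throw CME before the fix
          copy.get("example.key.0");           // illustrative read only
        }
      }
    };

    Thread[] copiers = new Thread[4];
    writer.start();
    for (int i = 0; i < copiers.length; i++) {
      copiers[i] = new Thread(copier, "copier-" + i);
      copiers[i].start();
    }
    writer.join();
    for (Thread t : copiers) {
      t.join();
    }
  }
}
{code}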



--
This message was sent by Atlassian JIRA
(v6.2#6252)
