crunch-dev mailing list archives

From "Josh Wills (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CRUNCH-568) Aggregators fail on SparkPipeline
Date Tue, 06 Oct 2015 21:32:26 GMT

     [ https://issues.apache.org/jira/browse/CRUNCH-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Wills updated CRUNCH-568:
------------------------------
    Attachment: CRUNCH-568.patch

So I think the right fix here is to avoid using a null key inside Aggregators.aggregate, rather than adding a null check to UniformHashPartitioner; that's what the attached patch does. But I'm open to being overruled -- [~gabriel.reid] or [~mkwhitacre]?
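To illustrate the two candidate fixes discussed above, here is a minimal, self-contained sketch (hypothetical class and method names, not Crunch's actual source) of why a hash-based partitioner throws a NullPointerException on a null key, and what a null check in the partitioner would look like:

```java
// Sketch only: mirrors the general shape of a uniform hash partitioner,
// not the exact code of org.apache.crunch.impl.mr.run.UniformHashPartitioner.
public class NullKeyPartitionSketch {

    // The failing pattern: dereferences the key without a null check,
    // so a null key (as produced by an Aggregators.aggregate stage)
    // throws NullPointerException here -- matching the reported stack trace.
    static int getPartition(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    // The alternative fix (adding a check to the partitioner), which the
    // comment above argues against in favor of not emitting a null key at all.
    static int getPartitionNullSafe(Object key, int numPartitions) {
        return key == null ? 0 : (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(getPartitionNullSafe(null, 4));  // survives a null key
        System.out.println(getPartitionNullSafe("a", 4));   // same as getPartition("a", 4)
        try {
            getPartition(null, 4);
        } catch (NullPointerException e) {
            System.out.println("NPE on null key, as in the reported failure");
        }
    }
}
```

The attached patch takes the other route: keep the partitioner strict and make the aggregation step emit a real (non-null) key, so no null ever reaches getPartition.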

> Aggregators fail on SparkPipeline
> ---------------------------------
>
>                 Key: CRUNCH-568
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-568
>             Project: Crunch
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 0.12.0
>            Reporter: Nithin Asokan
>         Attachments: CRUNCH-568.patch
>
>
> Logging this based on the mailing list discussion:
> http://mail-archives.apache.org/mod_mbox/crunch-user/201510.mbox/%3CCANb5z2KBqxZng92ToFo0MdTk2fd8jtGTjZ85h1yUo_akaetcXg%40mail.gmail.com%3E
> Running a Crunch SparkPipeline with the FirstN aggregator results in a NullPointerException.

> Example to recreate this:
> https://gist.github.com/nasokan/853ff80ce20ad7a78886
> Stack trace from the driver logs:
> {code}
> 15/10/05 16:02:33 WARN TaskSetManager: Lost task 3.0 in stage 0.0 (TID 0, 123.domain.xyz): java.lang.NullPointerException
>     at org.apache.crunch.impl.mr.run.UniformHashPartitioner.getPartition(UniformHashPartitioner.java:32)
>     at org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction.call(PartitionedMapOutputFunction.java:62)
>     at org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction.call(PartitionedMapOutputFunction.java:35)
>     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1002)
>     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1002)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>     at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
>     at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
>     at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>     at org.apache.spark.scheduler.Task.run(Task.scala:64)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
