spark-user mailing list archives

From Dmitriy Lyubimov <dlie...@gmail.com>
Subject Re: Dependency Versions
Date Fri, 16 Aug 2013 05:09:57 GMT
PS I guess one way of handling those common problems on Spark's side
would be to relocate (shade) its akka and netty classes into a separate
java namespace, parallel to the user's namespace, but no, it is not
there yet AFAIK -- at least not w.r.t. akka and netty.
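
For illustration, such a relocation could look roughly like this with
the maven-shade-plugin (the shaded package name here is made up, and to
be clear, Spark does not ship a config like this today AFAIK):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <relocations>
              <!-- rewrite spark's netty references to a private copy -->
              <relocation>
                <pattern>org.jboss.netty</pattern>
                <shadedPattern>spark.shaded.org.jboss.netty</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>

The shade plugin rewrites the bytecode references as well, so the
relocated copy and the user's own netty can coexist on one classpath.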

On Thu, Aug 15, 2013 at 10:06 PM, Dmitriy Lyubimov <dlieu.7@gmail.com> wrote:
> I don't know if you can (other than trying to rebuild Spark with
> _your_ version of netty, nothing else really comes to mind), but netty
> conflicts are a notorious problem (not Spark-specific at all). Spark
> uses akka, which uses netty, and even if you successfully override the
> netty version, my experience is that akka will stop working (meaning
> there will be no successful communication between any Spark processes
> at all).
>
> The only way I know of is to exclude netty from your job's
> dependencies and hope it still works (e.g. the hbase client pulls
> netty in as a dependency but in reality never uses it, so it is OK to
> just drop it).
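>
> For illustration, an exclusion along these lines in the POM is what I
> mean (coordinates are from memory -- older netty 3.x releases live
> under org.jboss.netty:netty, newer ones under io.netty:netty, so check
> which one your dependency actually pulls in):
>
>     <dependency>
>       <groupId>org.apache.hbase</groupId>
>       <artifactId>hbase</artifactId>
>       <version>${hbase.version}</version>
>       <exclusions>
>         <!-- illustrative: drop the unused transitive netty -->
>         <exclusion>
>           <groupId>org.jboss.netty</groupId>
>           <artifactId>netty</artifactId>
>         </exclusion>
>       </exclusions>
>     </dependency>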
>
> On Thu, Aug 15, 2013 at 7:02 PM, Daniel Duckworth <dux@premise.com> wrote:
>> tl;dr How does Spark decide precedence when loading JARs? Is there a way to
>> force Spark executors to draw all dependencies from my fat JAR?
>>
>> Hello Spark users,
>>
>> I've recently written a Spark task that uses finagle for making RPC
>> calls. The version of finagle I'm using (6.5.0) depends on netty
>> 3.5.12.Final, while Spark 0.7.0 uses netty 3.5.3. I've tweaked my
>> Maven POM to prefer netty 3.5.12 and can successfully run the task
>> with `MASTER=local`, `MASTER=local[8]` and
>> `MASTER=spark://127.0.0.1:7077` (when I start up a Spark master and
>> worker locally), but my attempts to run remotely on EC2 against a
>> standalone Spark cluster have resulted in the following
>> NoSuchMethodError when initializing my finagle client:
>> INFO  [2013-08-16T00:58:13.572] [spark-akka.actor.default-dispatcher-3:12]
>> spark.scheduler.cluster.TaskSetManager: Loss was due to
>> java.lang.NoSuchMethodError:
>> org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(Ljava/util/concurrent/Executor;I)V
>>         at com.twitter.finagle.netty3.WorkerPool$.<init>(WorkerPool.scala:12)
>>         at com.twitter.finagle.netty3.WorkerPool$.<clinit>(WorkerPool.scala)
>>         at com.twitter.finagle.netty3.Netty3Transporter$$anon$1.<init>(client.scala:225)
>>         at com.twitter.finagle.netty3.Netty3Transporter$.<init>(client.scala:224)
>>         at com.twitter.finagle.netty3.Netty3Transporter$.<clinit>(client.scala)
>>
>> The remote Spark server and workers are using netty 3.5.3, so I suspect that
>> their version of netty is overshadowing the one submitted in my fat JAR.
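>>
>> (One way I can think of to confirm this would be to log, from inside
>> a task, which jar the class was actually loaded from -- a rough
>> sketch, with println standing in for real logging:
>>
>>     val loc = classOf[org.jboss.netty.channel.socket.nio.NioWorkerPool]
>>       .getProtectionDomain.getCodeSource.getLocation  // jar URL
>>     println(loc)
>>
>> If that prints a path to Spark's own jars rather than my fat JAR, the
>> executor classpath is winning.)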
>>
>> Is there a way I can force Spark executors to draw dependencies from my fat
>> JAR, rather than what they already have?
>>
>> Thanks!
>>
>> - Daniel Duckworth
