Subject: Re: Running in cluster mode causes native library linking to fail
From: Bernardo Vecchia Stein <bernardovstein@gmail.com>
To: Renato Marroquín Mogrovejo <renatoj.marroquin@gmail.com>
Cc: Deenar Toraskar <deenar.toraskar@gmail.com>, user@spark.apache.org
Date: Wed, 14 Oct 2015 16:40:47 -0300

Hi Renato,

I have done that as well, but so far no luck. I believe spark is finding the library correctly, otherwise the error message would be "no libraryname found" or something like that. The problem seems to be something else, and I'm not sure how to find it.
Thanks,
Bernardo

On 14 October 2015 at 16:28, Renato Marroquín Mogrovejo <renatoj.marroquin@gmail.com> wrote:

> You can also try setting the env variable LD_LIBRARY_PATH to point where
> your compiled libraries are.
>
> Renato M.
>
> 2015-10-14 21:07 GMT+02:00 Bernardo Vecchia Stein <bernardovstein@gmail.com>:
>
>> Hi Deenar,
>>
>> Yes, the native library is installed on all machines of the cluster. I
>> tried a simpler approach by just using System.load() and passing the
>> exact path of the library, and things still won't work (I get exactly
>> the same error and message).
>>
>> Any ideas of what might be failing?
>>
>> Thank you,
>> Bernardo
>>
>> On 14 October 2015 at 02:50, Deenar Toraskar <deenar.toraskar@gmail.com>
>> wrote:
>>
>>> Hi Bernardo
>>>
>>> Is the native library installed on all machines of your cluster and are
>>> you setting both the spark.driver.extraLibraryPath and
>>> spark.executor.extraLibraryPath?
>>>
>>> Deenar
>>>
>>> On 14 October 2015 at 05:44, Bernardo Vecchia Stein
>>> <bernardovstein@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I am trying to run some scala code in cluster mode using spark-submit.
>>>> This code uses addLibrary to link with a .so that exists in the
>>>> machine, and this library has a function to be called natively
>>>> (there's a native definition as needed in the code).
>>>>
>>>> The problem I'm facing is: whenever I try to run this code in cluster
>>>> mode, spark fails with the following message when trying to execute
>>>> the native function:
>>>>
>>>> java.lang.UnsatisfiedLinkError:
>>>> org.name.othername.ClassName.nativeMethod([B[B)[B
>>>>
>>>> Apparently, the library is being found by spark, but the required
>>>> function isn't found.
>>>>
>>>> When trying to run in client mode, however, this doesn't fail and
>>>> everything works as expected.
>>>>
>>>> Does anybody have any idea of what might be the problem here? Is there
>>>> any bug that could be related to this when running in cluster mode?
>>>>
>>>> I appreciate any help.
>>>>
>>>> Thanks,
>>>> Bernardo
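
For context, the setup described in the original message looks roughly like the minimal sketch below. The package, object, and method names are the placeholders from the redacted error message, the library path is hypothetical, and the JNI signature ([B[B)[B in the error corresponds to a method that takes two byte arrays and returns a byte array.

    package org.name.othername

    object ClassName {
      // Load the shared library once per JVM. System.load() takes an absolute
      // path (the "simpler approach" mentioned above); System.loadLibrary("mylib")
      // would instead search java.library.path for libmylib.so.
      System.load("/absolute/path/to/libmylib.so")

      // Native declaration matching the ([B[B)[B signature. The symbol has to be
      // resolvable in the loaded .so on every JVM that calls the method; if it
      // isn't, an UnsatisfiedLinkError is thrown at the first call even though
      // the library itself loaded without error.
      @native def nativeMethod(a: Array[Byte], b: Array[Byte]): Array[Byte]
    }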
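Similarly, the two configuration suggestions from the thread could be combined roughly as in the sketch below; the library directory is a hypothetical placeholder for wherever the .so is installed on every node. Driver-side library paths are normally passed at submit time instead (for example with spark-submit's --driver-library-path or --conf spark.driver.extraLibraryPath=...), because the driver JVM is already running by the time application code builds a SparkConf.

    import org.apache.spark.{SparkConf, SparkContext}

    object NativeLinkingTest {
      def main(args: Array[String]): Unit = {
        // Hypothetical directory containing the .so on every node of the cluster.
        val libDir = "/opt/native/lib"

        val conf = new SparkConf()
          .setAppName("native-linking-test")
          // Deenar's suggestion: make the library directory visible to executors.
          .set("spark.executor.extraLibraryPath", libDir)
          // Renato's suggestion, applied to the executor environment.
          .set("spark.executorEnv.LD_LIBRARY_PATH", libDir)

        val sc = new SparkContext(conf)
        // ... run the job that calls the native method here ...
        sc.stop()
      }
    }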