spark-user mailing list archives

From Raghavendra Pandey <raghavendra.pan...@gmail.com>
Subject Re: How to compile Spark with customized Hadoop?
Date Sat, 10 Oct 2015 18:04:09 GMT
There is a "without Hadoop" build of Spark. You can use that to link against any
custom Hadoop version.
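A minimal sketch of how that is typically wired up, assuming the "without Hadoop" binary distribution and a custom Hadoop install (both paths below are illustrative):

```shell
# Point the "hadoop-free" Spark distribution at a custom Hadoop build.
# /opt/hadoop-custom is an illustrative path to your modified Hadoop.
export HADOOP_HOME=/opt/hadoop-custom

# SPARK_DIST_CLASSPATH tells a without-hadoop Spark build where to find
# the Hadoop jars; `hadoop classpath` prints that build's full classpath.
export SPARK_DIST_CLASSPATH=$("$HADOOP_HOME/bin/hadoop" classpath)
```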

Raghav
On Oct 10, 2015 5:34 PM, "Steve Loughran" <stevel@hortonworks.com> wrote:

>
> During development, I'd recommend giving Hadoop a version ending with
> -SNAPSHOT, and building spark with maven, as mvn knows to refresh the
> snapshot every day.
>
> you can do this in hadoop with
>
> mvn versions:set -DnewVersion=2.7.0.stevel-SNAPSHOT
>
> if you are working on hadoop branch-2 or trunk directly, they come with
> -SNAPSHOT versions anyway, but unless you build hadoop every morning, you may
> find maven pulls in the latest nightly builds from the apache snapshot
> repository, which will cause chaos and confusion. This is also why you must
> never have a maven build which spans midnight in your time zone.
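Spelled out as commands, the snapshot workflow above might look like this (the checkout paths and the `stevel` version suffix are illustrative, not fixed names):

```shell
# Give the modified Hadoop tree a private -SNAPSHOT version so Maven
# treats your local build as a refreshable snapshot, then install it
# into the local repository for Spark's build to pick up.
cd /path/to/hadoop                                  # your modified checkout
mvn versions:set -DnewVersion=2.7.0.stevel-SNAPSHOT
mvn install -DskipTests                             # publish to local ~/.m2

# Build Spark against that snapshot version.
cd /path/to/spark
./build/mvn -Dhadoop.version=2.7.0.stevel-SNAPSHOT -DskipTests clean package
```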
>
>
> On 9 Oct 2015, at 22:31, Matei Zaharia <matei.zaharia@gmail.com> wrote:
>
> You can publish your version of Hadoop to your local Maven cache with mvn
> install (just give it a different version number, e.g. 2.7.0a) and then
> pass that as the Hadoop version to Spark's build (see
> http://spark.apache.org/docs/latest/building-spark.html).
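As a sketch, assuming the `2.7.0a` version number from above and illustrative checkout paths:

```shell
# Give the modified Hadoop a distinct, non-colliding version number
# and install it into the local Maven cache (~/.m2/repository).
cd /path/to/hadoop
mvn versions:set -DnewVersion=2.7.0a
mvn install -DskipTests

# Then pass that version to Spark's Maven build.
cd /path/to/spark
./build/mvn -Dhadoop.version=2.7.0a -DskipTests clean package
```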
>
> Matei
>
> On Oct 9, 2015, at 3:10 PM, Dogtail L <spark.rui92@gmail.com> wrote:
>
> Hi all,
>
> I have modified Hadoop source code, and I want to compile Spark with my
> modified Hadoop. Do you know how to do that? Great thanks!
>
