spark-user mailing list archives

From Dogtail L <spark.ru...@gmail.com>
Subject Re: How to compile Spark with customized Hadoop?
Date Thu, 15 Oct 2015 05:20:13 GMT
Hi,

When I publish my version of Hadoop, it is installed in
/HOME_DIRECTORY/.m2/repository/org/apache/hadoop, but when I compile Spark,
it fetches the Hadoop libraries from
https://repo1.maven.org/maven2/org/apache/hadoop. How can I make Spark fetch
the Hadoop libraries from my local M2 cache instead? Many thanks!
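
For reference, this is roughly what I am running (the paths, the 2.7.0
version string, and the hadoop-2.6 profile are from my setup, following the
building-spark page, so treat them as placeholders):

    # install the modified Hadoop into the local repository (~/.m2)
    cd /path/to/hadoop
    mvn install -DskipTests

    # build Spark, passing it that Hadoop version
    cd /path/to/spark
    ./build/mvn -Phadoop-2.6 -Dhadoop.version=2.7.0 -DskipTests clean package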

On Fri, Oct 9, 2015 at 5:31 PM, Matei Zaharia <matei.zaharia@gmail.com>
wrote:

> You can publish your version of Hadoop to your local Maven cache with mvn
> install (just give it a different version number, e.g. 2.7.0a) and then
> pass that as the Hadoop version to Spark's build (see
> http://spark.apache.org/docs/latest/building-spark.html).
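>
> For example (2.7.0a is just an illustrative version string, and the
> versions plugin is one way to re-version the Hadoop POMs; the profile to
> pass is whatever the building-spark page lists for your Hadoop line):
>
>     # in the Hadoop tree: set a distinct version, install it into ~/.m2
>     mvn versions:set -DnewVersion=2.7.0a
>     mvn install -DskipTests
>
>     # in the Spark tree: build against that version
>     ./build/mvn -Phadoop-2.6 -Dhadoop.version=2.7.0a -DskipTests clean package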
>
> Matei
>
> On Oct 9, 2015, at 3:10 PM, Dogtail L <spark.rui92@gmail.com> wrote:
>
> Hi all,
>
> I have modified Hadoop source code, and I want to compile Spark with my
> modified Hadoop. Do you know how to do that? Great thanks!
