spark-issues mailing list archives

From "Sean Owen (JIRA)" <>
Subject [jira] [Commented] (SPARK-1954) Make it easier to get Spark on YARN code to compile in IntelliJ
Date Sat, 31 May 2014 10:15:02 GMT


Sean Owen commented on SPARK-1954:

How about having the profiles just set a default Hadoop / YARN version? i.e. hadoop-2.3 sets hadoop.version=2.3.0?
I think this was briefly discussed, and the idea was to make sure the user explicitly sets
a version, but I'm not sure this side effect was considered at the time.

This would make the regular Maven build work in an IDE out of the box. I think it would be better to
run directly off the Maven project build rather than re-export the SBT build repeatedly.
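For illustration, a profile along those lines might look like the following. This is a minimal sketch only: the profile id and version value mirror the example above, but the surrounding layout of Spark's root pom.xml is assumed, and an explicit -Dhadoop.version on the command line would still override the property.

```xml
<!-- Hypothetical sketch: a hadoop-2.3 profile that supplies a default
     hadoop.version. IntelliJ can toggle profiles when importing a Maven
     project, so selecting this profile would make the YARN code compile
     without hand-editing the pom. -->
<profile>
  <id>hadoop-2.3</id>
  <properties>
    <hadoop.version>2.3.0</hadoop.version>
  </properties>
</profile>
```

On the command line this would behave as before, since properties passed with -D take precedence over profile defaults, e.g. `mvn -Phadoop-2.3 -Dhadoop.version=2.3.0 package`.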

> Make it easier to get Spark on YARN code to compile in IntelliJ
> ---------------------------------------------------------------
>                 Key: SPARK-1954
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.0.0
>            Reporter: Sandy Ryza
> When loading a project through a Maven pom, IntelliJ allows switching on profiles, but,
> to my knowledge, doesn't provide a way to set arbitrary properties.
> To get Spark-on-YARN code to compile in IntelliJ, I need to manually change the hadoop.version
> in the root pom.xml to 2.2.0 or higher.  This is very cumbersome when switching branches.
> It would be really helpful to add a profile that sets the Hadoop version that IntelliJ
> can switch on.

This message was sent by Atlassian JIRA
