spark-commits mailing list archives

Subject spark git commit: SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project
Date Fri, 09 Jan 2015 17:35:51 GMT
Repository: spark
Updated Branches:
  refs/heads/master b4034c3f8 -> 547df9771

SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project

This PR simply points to the IntelliJ wiki page instead of also including IntelliJ notes in
the docs. The intent however is to also update the wiki page with updated tips. This is the
text I propose for the IntelliJ section on the wiki. I realize it omits some of the existing
instructions on the wiki, about enabling Hive, but I think those are actually optional.


IntelliJ supports both Maven- and SBT-based projects. It is recommended, however, to import
Spark as a Maven project. Choose "Import Project..." from the File menu, and select the `pom.xml`
file in the Spark root directory.

It is fine to leave all settings at their default values in the Maven import wizard, with
two caveats. First, it is usually useful to enable "Import Maven projects automatically",
since changes to the project structure will then automatically update the IntelliJ project.

Second, note the step that prompts you to choose active Maven build profiles. As documented
above, some build configurations require specific profiles to be enabled. The same profiles
that are enabled with `-P[profile name]` above may be enabled on this screen. For example,
if developing for Hadoop 2.4 with YARN support, enable profiles `yarn` and `hadoop-2.4`.
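For comparison, the equivalent command-line build enables the same profiles with `-P` flags. A sketch, matching the Hadoop 2.4 / YARN example above (the `-Dhadoop.version` value is illustrative, not prescribed by this commit):

```shell
# Build Spark from the command line with the same profiles selected
# in IntelliJ's Maven import wizard. -Pyarn and -Phadoop-2.4 mirror
# the example above; -DskipTests skips the test suite for a faster build.
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
```

Whatever profiles you pass here should match those checked in the "Maven Projects" tool window, or IntelliJ and the command line will build different configurations.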

These selections can be changed later by accessing the "Maven Projects" tool window from the
View menu, and expanding the Profiles section.

"Rebuild Project" can fail the first time the project is compiled, because generated source
files are not created automatically. Try clicking the "Generate Sources and Update Folders
For All Projects" button in the "Maven Projects" tool window to manually generate these sources.

Compilation may fail with an error like "scalac: bad option: -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar".
If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and clear the
"Additional compiler options" field. Compilation will then succeed, although the option will
reappear when the project is reimported.

Author: Sean Owen <>

Closes #3952 from srowen/SPARK-5136 and squashes the following commits:

f3baa66 [Sean Owen] Point to new IJ / Eclipse wiki link
016b7df [Sean Owen] Point to IntelliJ wiki page instead of also including IntelliJ notes in
the docs


Branch: refs/heads/master
Commit: 547df97715580f99ae573a49a86da12bf20cbc3d
Parents: b4034c3
Author: Sean Owen <>
Authored: Fri Jan 9 09:35:46 2015 -0800
Committer: Patrick Wendell <>
Committed: Fri Jan 9 09:35:46 2015 -0800

 docs/ | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/docs/ b/docs/
index c1bcd91..fb93017 100644
--- a/docs/
+++ b/docs/
@@ -151,9 +151,10 @@ Thus, the full flow for running continuous-compilation of the `core` submodule m
  $ mvn scala:cc
-# Using With IntelliJ IDEA
+# Building Spark with IntelliJ IDEA or Eclipse
-This setup works fine in IntelliJ IDEA 11.1.4. After opening the project via the pom.xml file in the project root folder, you only need to activate either the hadoop1 or hadoop2 profile in the "Maven Properties" popout. We have not tried Eclipse/Scala IDE with this.
+For help in setting up IntelliJ IDEA or Eclipse for Spark development, and troubleshooting, refer to the
+[wiki page for IDE setup](
 # Building Spark Debian Packages
