Added: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/r.html URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/r.html?rev=1751605&view=auto ============================================================================== --- zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/r.html (added) +++ zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/r.html Wed Jul 6 06:25:29 2016 @@ -0,0 +1,306 @@ + + + + + + R Interpreter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+
+

R Interpreter for Apache Zeppelin

+ +
+ +

Overview

+ +

R is a free software environment for statistical computing and graphics.

+ +

To run R code and visualize plots in Apache Zeppelin, you will need R on your master node (or your dev laptop).

+ +
    +
  • For CentOS: yum install R R-devel libcurl-devel openssl-devel
  • +
  • For Ubuntu: apt-get install r-base
  • +
+ +

Validate your installation with a simple R command:

+
R -e "print(1+1)"
+
+

To render plots, install these additional libraries:

+
+ devtools with `R -e "install.packages('devtools', repos = 'http://cran.us.r-project.org')"`
++ knitr with `R -e "install.packages('knitr', repos = 'http://cran.us.r-project.org')"`
++ ggplot2 with `R -e "install.packages('ggplot2', repos = 'http://cran.us.r-project.org')"`
++ Other visualization libraries: `R -e "install.packages(c('devtools','mplot', 'googleVis'), repos = 'http://cran.us.r-project.org'); require(devtools); install_github('ramnathv/rCharts')"`
+
+

We also recommend installing the following optional R libraries for data analytics:

+ +
    +
  • glmnet
  • +
  • pROC
  • +
  • data.table
  • +
  • caret
  • +
  • sqldf
  • +
  • wordcloud
  • +
+ +

Configuration

+ +

To run Zeppelin with the R Interpreter, the SPARK_HOME environment variable must be set. The best way to do this is by editing conf/zeppelin-env.sh. If it is not set, the R Interpreter will not be able to interface with Spark.
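For example, in conf/zeppelin-env.sh (the installation path is illustrative):

export SPARK_HOME=/usr/lib/spark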

+ +

You should also copy conf/zeppelin-site.xml.template to conf/zeppelin-site.xml. That will ensure that Zeppelin sees the R Interpreter the first time it starts up.

+ +

Using the R Interpreter

+ +

By default, the R Interpreter appears as two Zeppelin Interpreters, %r and %knitr.

+ +

%r will behave like an ordinary REPL. You can execute commands as in the CLI.
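For example, a paragraph such as the following runs plain R and prints the results (a minimal sketch; the data are illustrative):

%r
# ordinary R code, just as you would type it at the R prompt
x <- c(1, 2, 3, 4, 5)
mean(x)
summary(x)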

+ +

+ +

R base plotting is fully supported
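For instance, a standard base-graphics call renders its plot inline (a minimal sketch):

%r
# base R plotting; the histogram is displayed in the paragraph output
hist(rnorm(1000), main = "1000 draws from a standard normal")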

+ +

+ +

If you return a data.frame, Zeppelin will attempt to display it using Zeppelin's built-in visualizations.
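For example, returning one of R's built-in data sets displays it in the table view (a minimal sketch):

%r
# the last expression returns a data.frame, which Zeppelin renders as a table
head(iris)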

+ +

+ +

%knitr interfaces directly against knitr, with chunk options on the first line:
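A minimal sketch (the specific chunk options are illustrative; any valid knitr chunk options can be placed on the first line):

%knitr
echo=FALSE, fig.width=10, fig.height=6
# everything below the options line is ordinary R, rendered through knitr
hist(rnorm(1000))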

+ +

+ +

+ +

+ +

The two interpreters share the same environment. If you define a variable in %r, it will be in scope when you then make a call using knitr.

+ +

Using SparkR & Moving Between Languages

+ +

If SPARK_HOME is set, the SparkR package will be loaded automatically:

+ +

+ +

The Spark Context and SQL Context are created and injected into the local environment automatically as sc and sql.

+ +

The same contexts are shared with the %spark, %sql and %pyspark interpreters:
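A minimal sketch (the data set and table name are illustrative; the SparkR calls shown use the Spark 1.x API):

%r
# create a Spark DataFrame from a local R data.frame using the injected sql context
df <- createDataFrame(sql, faithful)
# register it so other interpreters can query it
registerTempTable(df, "faithful")

%sql
select * from faithful limit 10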

+ +

+ +

You can also make an ordinary R variable accessible in Scala and Python:

+ +

+ +

And vice versa:
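A minimal sketch of moving a value both ways (the variable names are illustrative, and the R-side z.put/z.get calls are assumed to mirror the Scala/Python ZeppelinContext API described in the Spark interpreter documentation):

%r
# push an R value into the shared ZeppelinContext (assumed API)
z.put("myNumber", 42)

%spark
// read it back in Scala; the reverse direction uses z.put here and z.get in %r
val myNumber = z.get("myNumber")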

+ +

+ +

+ +

Caveats & Troubleshooting

+ +
    +
  • Almost all issues with the R interpreter turn out to be caused by an incorrectly set SPARK_HOME. The R interpreter must load a version of the SparkR package that matches the running version of Spark, and it does this by searching SPARK_HOME. If Zeppelin isn't configured to interface with Spark in SPARK_HOME, the R interpreter will not be able to connect to Spark.

  • +
  • The knitr environment is persistent. If you run a chunk from Zeppelin that changes a variable, then run the same chunk again, the variable has already been changed. Use immutable variables.

  • +
  • Note that %spark.r and %r are two different ways of calling the same interpreter, as are %spark.knitr and %knitr. By default, Zeppelin puts the R interpreters in the %spark Interpreter Group.

  • +
  • Using the %r interpreter, if you return a data.frame, HTML, or an image, it will dominate the result. So if you execute three commands, and one is hist(), all you will see is the histogram, not the results of the other commands. This is a Zeppelin limitation.

  • +
  • If you return a data.frame (for instance, from calling head()) from the %spark.r interpreter, it will be parsed by Zeppelin's built-in data visualization system.

  • +
  • Why knitr instead of rmarkdown? Why no htmlwidgets? In order to support htmlwidgets, which has indirect dependencies, rmarkdown uses pandoc, which requires writing to and reading from disk. This makes it many times slower than knitr, which can operate entirely in RAM.

  • +
  • Why no ggvis or shiny? Supporting shiny would require integrating a reverse proxy into Zeppelin, which is a substantial undertaking.

  • +
  • Mac OS X & case-insensitive filesystem. If you try to install on a case-insensitive filesystem, which is the Mac OS X default, Maven can unintentionally delete the install directory because r and R become the same subdirectory.

  • +
  • Error unable to start device X11 with the repl interpreter. Check your shell login scripts to see if they are adjusting the DISPLAY environment variable. This is common on some operating systems as a workaround for ssh issues, but can interfere with R plotting.

  • +
  • akka Library Version or TTransport errors. This can happen if you try to run Zeppelin with a SPARK_HOME that has a version of Spark other than the one specified with -Pspark-1.x when Zeppelin was compiled.

  • +
+ +
+
+ + +
+ +
+ + + + + + + + + + + Propchange: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/r.html ------------------------------------------------------------------------------ svn:eol-style = native Added: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/scalding.html URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/scalding.html?rev=1751605&view=auto ============================================================================== --- zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/scalding.html (added) +++ zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/scalding.html Wed Jul 6 06:25:29 2016 @@ -0,0 +1,333 @@ + + + + + + Scalding Interpreter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+
+

Scalding Interpreter for Apache Zeppelin

+ +
+ +

Scalding is an open source Scala library for writing MapReduce jobs.

+ +

Building the Scalding Interpreter

+ +

You first have to build the Scalding interpreter by enabling the scalding profile as follows:

+
mvn clean package -Pscalding -DskipTests
+
+

Enabling the Scalding Interpreter

+ +

In a notebook, to enable the Scalding interpreter, click on the Gear icon, select Scalding, and hit Save.

+ +

+ +

Interpreter Binding

+ +

Interpreter Selection

+ +

+ +

Configuring the Interpreter

+ +

The Scalding interpreter runs in two modes:

+ +
    +
  • local
  • +
  • hdfs
  • +
+ +

In local mode, you can access files on the local server, and Scalding transformations are done locally.

+ +

In hdfs mode, you can access files in HDFS, and Scalding transformations are run as Hadoop MapReduce jobs.

+ +

Zeppelin comes with a pre-configured Scalding interpreter in local mode.

+ +

To run the Scalding interpreter in hdfs mode, you have to do the following:

+ +

Set the classpath with ZEPPELIN_CLASSPATH_OVERRIDES

+ +

In conf/zeppelin-env.sh, set ZEPPELIN_CLASSPATH_OVERRIDES to the output of 'hadoop classpath' plus any directories containing custom jar files you need for your Scalding commands.
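For example, in conf/zeppelin-env.sh (the custom jar directory is illustrative):

export ZEPPELIN_CLASSPATH_OVERRIDES="$(hadoop classpath):/path/to/custom/jars/*"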

+ +

Set arguments to the scalding repl

+ +

The default arguments are: "--local --repl"

+ +

For hdfs mode you need to add: "--hdfs --repl"

+ +

If you want to add custom jars, you need to add: +"-libjars directory/:directory/"

+ +

For reducer estimation, you need to add something like: +"-Dscalding.reducer.estimator.classes=com.twitter.scalding.reducer_estimation.InputSizeReducerEstimator"
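Putting these together, the interpreter's args property for hdfs mode with custom jars and reducer estimation might look like this (the jar directories are illustrative):

--hdfs --repl -libjars /opt/libs/:/opt/custom-jars/ -Dscalding.reducer.estimator.classes=com.twitter.scalding.reducer_estimation.InputSizeReducerEstimator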

+ +

Set max.open.instances

+ +

If you want to control the maximum number of open interpreters, you have to select the "scoped" interpreter-for-note option and set the max.open.instances argument.

+ +

Testing the Interpreter

+ +

Local mode

+ +

As an example, using the Alice in Wonderland tutorial, we will count words (of course!) and plot a graph of the top 10 words in the book.

+
%scalding
+
+import scala.io.Source
+
+// Get the Alice in Wonderland book from gutenberg.org:
+val alice = Source.fromURL("http://www.gutenberg.org/files/11/11.txt").getLines
+val aliceLineNum = alice.zipWithIndex.toList
+val alicePipe = TypedPipe.from(aliceLineNum)
+
+// Now get a list of words for the book:
+val aliceWords = alicePipe.flatMap { case (text, _) => text.split("\\s+").toList }
+
+// Now lets add a count for each word:
+val aliceWithCount = aliceWords.filterNot(_.equals("")).map { word => (word, 1L) }
+
+// let's sum them for each word:
+val wordCount = aliceWithCount.group.sum
+
+print ("Here are the top 10 words\n")
+val top10 = wordCount
+  .groupAll
+  .sortBy { case (word, count) => -count }
+  .take(10)
+top10.dump
+
%scalding
+
+val table = "words\t count\n" + top10.toIterator.map{case (k, (word, count)) => s"$word\t$count"}.mkString("\n")
+print("%table " + table)
+
+

If you click on the pie chart icon, you should see a chart like this: (screenshot: Scalding pie chart)

+ +

HDFS mode

+ +

Test mode

+
%scalding
+mode
+
+

This command should print:

+
res4: com.twitter.scalding.Mode = Hdfs(true,Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml)
+
+

Test HDFS read

+
val testfile = TypedPipe.from(TextLine("/user/x/testfile"))
+testfile.dump
+
+

This command should print the contents of the hdfs file /user/x/testfile.

+ +

Test map-reduce job

+
val testfile = TypedPipe.from(TextLine("/user/x/testfile"))
+val a = testfile.groupAll.size.values
+a.toList
+
+

This command should create a MapReduce job.

+ +

Future Work

+ +
    +
  • Better user feedback (hadoop url, progress updates)
  • +
  • Ability to cancel jobs
  • +
  • Ability to dynamically load jars without restarting the interpreter
  • +
  • Multiuser scalability (run scalding interpreters on different servers)
  • +
+ +
+
+ + +
+ +
+ + + + + + + + + + + Propchange: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/scalding.html ------------------------------------------------------------------------------ svn:eol-style = native Added: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/shell.html URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/shell.html?rev=1751605&view=auto ============================================================================== --- zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/shell.html (added) +++ zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/shell.html Wed Jul 6 06:25:29 2016 @@ -0,0 +1,216 @@ + + + + + + Shell Interpreter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+
+

Shell interpreter for Apache Zeppelin

+ +

Overview

+ +

The Shell interpreter uses Apache Commons Exec to execute external processes.

+ +

In a Zeppelin notebook, you can use %sh at the beginning of a paragraph to invoke the system shell and run commands. Note: Currently, each command runs as the Zeppelin user.

+ +

Example

+ +

The following example demonstrates the basic usage of Shell in a Zeppelin notebook.
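A minimal sketch of such a paragraph (the commands are illustrative):

%sh
# any commands available to the Zeppelin user can be run here
echo "Hello from the shell interpreter"
whoami
ls -l /tmp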

+ +

+ +
+
+ + +
+ +
+ + + + + + + + + + + Propchange: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/shell.html ------------------------------------------------------------------------------ svn:eol-style = native Added: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/spark.html URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/spark.html?rev=1751605&view=auto ============================================================================== --- zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/spark.html (added) +++ zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/spark.html Wed Jul 6 06:25:29 2016 @@ -0,0 +1,586 @@ + + + + + + Spark Interpreter Group + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + +
+
+

Spark Interpreter for Apache Zeppelin

+ +
+ +

Overview

+ +

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. Apache Spark is supported in Zeppelin with the Spark Interpreter group, which consists of five interpreters.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Name | Class | Description
%spark | SparkInterpreter | Creates a SparkContext and provides a Scala environment
%pyspark | PySparkInterpreter | Provides a Python environment
%r | SparkRInterpreter | Provides an R environment with SparkR support
%sql | SparkSQLInterpreter | Provides a SQL environment
%dep | DepInterpreter | Dependency loader
+ +

Configuration

+ +

The Spark interpreter can be configured with properties provided by Zeppelin. You can also set other Spark properties which are not listed in the table; for a list of additional properties, refer to Spark Available Properties.

Property | Default | Description
args |  | Spark command-line args
master | local[*] | Spark master URI. ex) spark://masterhost:7077
spark.app.name | Zeppelin | The name of the Spark application.
spark.cores.max |  | Total number of cores to use. An empty value uses all available cores.
spark.executor.memory | 512m | Executor memory per worker instance. ex) 512m, 32g
zeppelin.dep.additionalRemoteRepository | spark-packages,http://dl.bintray.com/spark-packages/maven,false; | A list of id,remote-repository-URL,is-snapshot; for each remote repository.
zeppelin.dep.localrepo | local-repo | Local repository for dependency loader
zeppelin.pyspark.python | python | Python command to run pyspark with
zeppelin.spark.concurrentSQL | false | Execute multiple SQL concurrently if set true.
zeppelin.spark.maxResult | 1000 | Max number of SparkSQL results to display.
zeppelin.spark.printREPLOutput | true | Print REPL output
zeppelin.spark.useHiveContext | true | Use HiveContext instead of SQLContext if it is true.
zeppelin.spark.importImplicit | true | Import implicits, UDF collection, and sql if set true.

+ +

Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you'll need to follow the two simple steps below.

+ +

1. Export SPARK_HOME

+ +

In conf/zeppelin-env.sh, export SPARK_HOME environment variable with your Spark installation path.

+ +

For example:

+
export SPARK_HOME=/usr/lib/spark
+
+

You can optionally export HADOOP_CONF_DIR and SPARK_SUBMIT_OPTIONS

+
export HADOOP_CONF_DIR=/usr/lib/hadoop
+export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0"
+
+

For Windows, ensure you have winutils.exe in %HADOOP_HOME%\bin. For more details please see Problems running Hadoop on Windows

+ +

2. Set master in Interpreter menu

+ +

After starting Zeppelin, go to the Interpreter menu and edit the master property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.

+ +

For example,

+ +
    +
  • local[*] in local mode
  • +
  • spark://master:7077 in standalone cluster
  • +
  • yarn-client in Yarn client mode
  • +
  • mesos://host:5050 in Mesos cluster
  • +
+ +

That's it. This way, Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin. (The Zeppelin 0.5.6-incubating release works with Spark versions up to 1.6.1.)

+ +
+

Note that without exporting SPARK_HOME, Zeppelin runs in local mode with an included version of Spark. The included version may vary depending on the build profile.

+
+ +

SparkContext, SQLContext, ZeppelinContext

+ +

SparkContext, SQLContext and ZeppelinContext are automatically created and exposed as the variables sc, sqlContext and z, respectively, in both the Scala and Python environments.
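For example, the following Scala paragraph uses the pre-created variables directly (a minimal sketch):

%spark
// sc, sqlContext and z are already provided by Zeppelin; do not create your own
val rdd = sc.parallelize(1 to 100)
println(rdd.sum())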

+ +
+

Note that the Scala and Python environments share the same SparkContext, SQLContext and ZeppelinContext instances.

+
+ +

+ +

Dependency Management

+ +

There are two ways to load external libraries into the Spark interpreter: using the Interpreter setting menu, and loading Spark properties.

+ +

1. Setting Dependencies via Interpreter Setting

+ +

Please see Dependency Management for the details.

+ +

2. Loading Spark Properties

+ +

Once SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses spark-submit as the Spark interpreter runner. spark-submit supports two ways to load configurations. The first is command-line options such as --master; Zeppelin can pass these options to spark-submit by exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. The second is reading configuration options from SPARK_HOME/conf/spark-defaults.conf. Spark properties that users can set to distribute libraries are:

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
spark-defaults.conf | SPARK_SUBMIT_OPTIONS | Applicable Interpreter | Description
spark.jars | --jars | %spark | Comma-separated list of local jars to include on the driver and executor classpaths.
spark.jars.packages | --packages | %spark | Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.
spark.files | --files | %pyspark | Comma-separated list of files to be placed in the working directory of each executor.
+ +
+

Note that adding jars to pyspark is only available via the %dep interpreter at the moment.

+
+ +

Here are a few examples:

+ +
    +
  • SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh

    + +

    export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"

  • +
  • SPARK_HOME/conf/spark-defaults.conf

    + +

    spark.jars /path/mylib1.jar,/path/mylib2.jar
    spark.jars.packages com.databricks:spark-csv_2.10:1.2.0
    spark.files /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip

  • +
+ +

3. Dynamic Dependency Loading via %dep interpreter

+ +
+

Note: The %dep interpreter has been deprecated since v0.6.0. The %dep interpreter loads libraries into %spark and %pyspark but not into the %spark.sql interpreter, so we recommend using the first option instead.

+
+ +

When your code requires an external library, instead of downloading it, copying it, and restarting Zeppelin, you can easily do the following jobs using the %dep interpreter.

+ +
    +
  • Load libraries recursively from Maven repository
  • +
  • Load libraries from local filesystem
  • +
  • Add additional maven repository
  • +
  • Automatically add libraries to the Spark cluster (you can turn this off)
  • +
+ +

The dep interpreter leverages the Scala environment, so you can write any Scala code here. Note that the %dep interpreter should be used before %spark, %pyspark, and %sql.

+ +

Here are some usage examples.

+
%dep
+z.reset() // clean up previously added artifact and repository
+
+// add maven repository
+z.addRepo("RepoName").url("RepoURL")
+
+// add maven snapshot repository
+z.addRepo("RepoName").url("RepoURL").snapshot()
+
+// add credentials for private maven repository
+z.addRepo("RepoName").url("RepoURL").username("username").password("password")
+
+// add artifact from filesystem
+z.load("/path/to.jar")
+
+// add artifact from maven repository, with no dependency
+z.load("groupId:artifactId:version").excludeAll()
+
+// add artifact recursively
+z.load("groupId:artifactId:version")
+
+// add artifact recursively except comma separated GroupID:ArtifactId list
+z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")
+
+// exclude with pattern
+z.load("groupId:artifactId:version").exclude(*)
+z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
+z.load("groupId:artifactId:version").exclude("groupId:*")
+
+// local() skips adding artifact to spark clusters (skipping sc.addJar())
+z.load("groupId:artifactId:version").local()
+
+

ZeppelinContext

+ +

Zeppelin automatically injects ZeppelinContext as the variable 'z' in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.

+ +

Object Exchange

+ +

ZeppelinContext extends a map and is shared between the Scala and Python environments, so you can put an object in from Scala and read it from Python, and vice versa.

+ +
+
+ + +
// Put object from scala
+%spark
+val myObject = ...
+z.put("objName", myObject)
+
+ + +
+
+ + +
# Get object from python
+%pyspark
+myObject = z.get("objName")
+
+ + +
+
+ +

Form Creation

+ +

ZeppelinContext provides functions for creating forms. In the Scala and Python environments, you can create forms programmatically.

+

+ +
%spark
+/* Create text input form */
+z.input("formName")
+
+/* Create text input form with default value */
+z.input("formName", "defaultValue")
+
+/* Create select form */
+z.select("formName", Seq(("option1", "option1DisplayName"),
+                         ("option2", "option2DisplayName")))
+
+/* Create select form with default value*/
+z.select("formName", "option1", Seq(("option1", "option1DisplayName"),
+                                    ("option2", "option2DisplayName")))
+
+ + +
+
+ + +
%pyspark
+# Create text input form
+z.input("formName")
+
+# Create text input form with default value
+z.input("formName", "defaultValue")
+
+# Create select form
+z.select("formName", [("option1", "option1DisplayName"),
+                      ("option2", "option2DisplayName")])
+
+# Create select form with default value
+z.select("formName", [("option1", "option1DisplayName"),
+                      ("option2", "option2DisplayName")], "option1")
+
+ + +
+
+ +

In the SQL environment, you can create a form using a simple template.

+
%sql
+select * from ${table=defaultTableName} where text like '%${search}%'
+
+

To learn more about dynamic forms, check out Dynamic Form.

+ +

Interpreter setting option

+ +

The interpreter setting can be one of the 'shared', 'scoped', or 'isolated' options. In 'scoped' mode (experimental), the Spark interpreter creates a separate Scala compiler for each notebook but shares a single SparkContext. In 'isolated' mode, it creates a separate SparkContext for each notebook.

+ +

Setting up Zeppelin with Kerberos

+ +

Logical setup with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark on YARN:

+ +

+ +

Configuration Setup

+ +
    +
  1. On the server where Zeppelin is installed, install the Kerberos client modules and configuration, krb5.conf. This is to allow the server to communicate with the KDC.

  2. +
  3. Set SPARK_HOME in [ZEPPELIN_HOME]/conf/zeppelin-env.sh to use spark-submit. (Additionally, you might have to set export HADOOP_CONF_DIR=/etc/hadoop/conf.)

  4. +
  5. Add the two properties below to spark configuration ([SPARK_HOME]/conf/spark-defaults.conf):

    +
    spark.yarn.principal
    +spark.yarn.keytab
    +
    +
    +

    NOTE: If you do not have access to the above spark-defaults.conf file, optionally, you may add the lines to the Spark Interpreter through the Interpreter tab in the Zeppelin UI.

    +
  6. +
  7. That's it. Play with Zeppelin!

  8. +
+ +
+
+ + +
+ +
+ + + + + + + + + + + Propchange: zeppelin/site/docs/0.7.0-SNAPSHOT/interpreter/spark.html ------------------------------------------------------------------------------ svn:eol-style = native