spark-commits mailing list archives

From marmb...@apache.org
Subject git commit: [SQL][DOCS] Improve section on thrift-server
Date Tue, 16 Sep 2014 18:52:18 GMT
Repository: spark
Updated Branches:
  refs/heads/branch-1.1 99a6c5e5e -> 75158a7eb


[SQL][DOCS] Improve section on thrift-server

Taken from liancheng's updates. Merged conflicts with #2316.

Author: Michael Armbrust <michael@databricks.com>

Closes #2384 from marmbrus/sqlDocUpdate and squashes the following commits:

2db6319 [Michael Armbrust] @liancheng's updates

(cherry picked from commit 84073eb1172dc959936149265378f6e24d303685)
Signed-off-by: Michael Armbrust <michael@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/75158a7e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/75158a7e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/75158a7e

Branch: refs/heads/branch-1.1
Commit: 75158a7eb4a7e8ae59a3c0c99f102271078856dc
Parents: 99a6c5e
Author: Michael Armbrust <michael@databricks.com>
Authored: Tue Sep 16 11:51:46 2014 -0700
Committer: Michael Armbrust <michael@databricks.com>
Committed: Tue Sep 16 11:52:02 2014 -0700

----------------------------------------------------------------------
 docs/sql-programming-guide.md | 58 ++++++++++++++++++++++++++------------
 1 file changed, 40 insertions(+), 18 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/75158a7e/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 409b372..2c22641 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -128,7 +128,7 @@ feature parity with a HiveContext.
 
 </div>
 
-The specific variant of SQL that is used to parse queries can also be selected using the 
+The specific variant of SQL that is used to parse queries can also be selected using the
 `spark.sql.dialect` option.  This parameter can be changed using either the `setConf` method on
 a SQLContext or by using a `SET key=value` command in SQL.  For a SQLContext, the only dialect
 available is "sql" which uses a simple SQL parser provided by Spark SQL.  In a HiveContext, the
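
By way of illustration, the two mechanisms mentioned above look roughly like this in Scala (a sketch only, assuming an existing `SparkContext` named `sc` and Spark 1.1 APIs):

    import org.apache.spark.sql.hive.HiveContext

    val hiveContext = new HiveContext(sc)

    // Programmatic form: setConf on the context.
    hiveContext.setConf("spark.sql.dialect", "sql")    // use the simple SQL parser

    // SQL form: a SET key=value command.
    hiveContext.sql("SET spark.sql.dialect=hiveql")    // switch back to the HiveQL parser
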
@@ -139,7 +139,7 @@ default is "hiveql", though "sql" is also available.  Since the HiveQL parser is
 
 Spark SQL supports operating on a variety of data sources through the `SchemaRDD` interface.
 A SchemaRDD can be operated on as normal RDDs and can also be registered as a temporary table.
-Registering a SchemaRDD as a table allows you to run SQL queries over its data.  This section 
+Registering a SchemaRDD as a table allows you to run SQL queries over its data.  This section
 describes the various methods for loading data into a SchemaRDD.
 
 ## RDDs
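
As a concrete sketch of the registration step described above (assuming an existing `sqlContext` and a SchemaRDD named `people`; the names are illustrative only):

    // Register the SchemaRDD as a temporary table ...
    people.registerTempTable("people")

    // ... then run SQL over it; the result is again a SchemaRDD.
    val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
    teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
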
@@ -152,7 +152,7 @@ while writing your Spark application.
 The second method for creating SchemaRDDs is through a programmatic interface that allows you to
 construct a schema and then apply it to an existing RDD.  While this method is more verbose, it allows
 you to construct SchemaRDDs when the columns and their types are not known until runtime.
- 
+
 ### Inferring the Schema Using Reflection
 <div class="codetabs">
 
@@ -193,7 +193,7 @@ teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
 <div data-lang="java"  markdown="1">
 
 Spark SQL supports automatically converting an RDD of [JavaBeans](http://stackoverflow.com/questions/3295496/what-is-a-javabean-exactly)
-into a Schema RDD.  The BeanInfo, obtained using reflection, defines the schema of the table. 
+into a Schema RDD.  The BeanInfo, obtained using reflection, defines the schema of the table.
 Currently, Spark SQL does not support JavaBeans that contain
 nested or contain complex types such as Lists or Arrays.  You can create a JavaBean by creating a
 class that implements Serializable and has getters and setters for all of its fields.
@@ -480,7 +480,7 @@ for name in names.collect():
 
 [Parquet](http://parquet.io) is a columnar format that is supported by many other data processing systems.
 Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema
-of the original data.  
+of the original data.
 
 ### Loading Data Programmatically
 
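A minimal Scala sketch of the Parquet round trip described above (assuming an existing `sqlContext` and a SchemaRDD named `people`; the path is only an example):

    // Write the SchemaRDD out as a Parquet file; the schema is preserved.
    people.saveAsParquetFile("people.parquet")

    // Read it back in; the result is a SchemaRDD with the original schema.
    val parquetFile = sqlContext.parquetFile("people.parquet")
    parquetFile.registerTempTable("parquetFile")
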
@@ -562,7 +562,7 @@ for teenName in teenNames.collect():
 
 </div>
 
-</div>                                                                            
+</div>
 
 ### Configuration
 
@@ -808,7 +808,7 @@ memory usage and GC pressure. You can call `uncacheTable("tableName")` to remove
 Note that if you call `cache` rather than `cacheTable`, tables will _not_ be cached using
 the in-memory columnar format, and therefore `cacheTable` is strongly recommended for this use case.
 
-Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running 
+Configuration of in-memory caching can be done using the `setConf` method on SQLContext or by running
 `SET key=value` commands using SQL.
 
 <table class="table">
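
Sketched in Scala, the caching workflow described above looks roughly as follows (assuming an existing `sqlContext` and a registered table named "logs"; the property name is an example and should be checked against the options listed in the table below):

    // Cache the table using the in-memory columnar format.
    sqlContext.cacheTable("logs")

    // Tune caching via setConf (equivalent to a SET command in SQL).
    sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")

    // Free the memory again when the table is no longer needed.
    sqlContext.uncacheTable("logs")
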
@@ -881,10 +881,32 @@ To start the JDBC server, run the following in the Spark directory:
 
     ./sbin/start-thriftserver.sh
 
-The default port the server listens on is 10000.  To listen on customized host and port, please set
-the `HIVE_SERVER2_THRIFT_PORT` and `HIVE_SERVER2_THRIFT_BIND_HOST` environment variables. You may
-run `./sbin/start-thriftserver.sh --help` for a complete list of all available options.  Now you can
-use beeline to test the Thrift JDBC server:
+This script accepts all `bin/spark-submit` command line options, plus a `--hiveconf` option to
+specify Hive properties.  You may run `./sbin/start-thriftserver.sh --help` for a complete list of
+all available options.  By default, the server listens on localhost:10000. You may override this
+behaviour via either environment variables, i.e.:
+
+{% highlight bash %}
+export HIVE_SERVER2_THRIFT_PORT=<listening-port>
+export HIVE_SERVER2_THRIFT_BIND_HOST=<listening-host>
+./sbin/start-thriftserver.sh \
+  --master <master-uri> \
+  ...
+{% endhighlight %}
+
+or system properties:
+
+{% highlight bash %}
+./sbin/start-thriftserver.sh \
+  --hiveconf hive.server2.thrift.port=<listening-port> \
+  --hiveconf hive.server2.thrift.bind.host=<listening-host> \
+  --master <master-uri> \
+  ...
+{% endhighlight %}
+
+Now you can use beeline to test the Thrift JDBC server:
 
     ./bin/beeline
 
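The server speaks the HiveServer2 Thrift/JDBC protocol, so plain JDBC clients can connect as well as beeline. A rough Scala sketch (assuming the Hive JDBC driver is on the classpath and the server is running with its default settings; user name and query are placeholders):

    import java.sql.DriverManager

    // Load the HiveServer2 JDBC driver (provided by the hive-jdbc artifact).
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000", "user", "")
    val stmt = conn.createStatement()
    val rs = stmt.executeQuery("SHOW TABLES")
    while (rs.next()) println(rs.getString(1))
    conn.close()
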
@@ -930,7 +952,7 @@ SQL deprecates this property in favor of `spark.sql.shuffle.partitions`, whose d
 is 200. Users may customize this property via `SET`:
 
     SET spark.sql.shuffle.partitions=10;
-    SELECT page, count(*) c 
+    SELECT page, count(*) c
     FROM logs_last_month_cached
     GROUP BY page ORDER BY c DESC LIMIT 10;
 
@@ -1139,7 +1161,7 @@ evaluated by the SQL execution engine.  A full list of the functions supported c
 <div data-lang="scala"  markdown="1">
 
 All data types of Spark SQL are located in the package `org.apache.spark.sql`.
-You can access them by doing 
+You can access them by doing
 {% highlight scala %}
 import  org.apache.spark.sql._
 {% endhighlight %}
@@ -1245,7 +1267,7 @@ import  org.apache.spark.sql._
 <tr>
   <td> <b>StructType</b> </td>
   <td> org.apache.spark.sql.Row </td>
-  <td> 
+  <td>
   StructType(<i>fields</i>)<br />
   <b>Note:</b> <i>fields</i> is a Seq of StructFields. Also, two fields with the same
   name are not allowed.
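
To make the StructType entry above concrete, a small Scala sketch that builds such a schema and applies it to an RDD of Rows (assuming an existing `sc` and `sqlContext`; field names and values are illustrative):

    import org.apache.spark.sql._

    // A schema is a StructType over a Seq of StructFields (field names must be unique).
    val schema = StructType(Seq(
      StructField("name", StringType, nullable = true),
      StructField("age", IntegerType, nullable = true)))

    // Applying it to an RDD of Rows yields a SchemaRDD.
    val rowRDD = sc.parallelize(Seq(Row("Michael", 29), Row("Andy", 30)))
    val peopleSchemaRDD = sqlContext.applySchema(rowRDD, schema)
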
@@ -1267,7 +1289,7 @@ import  org.apache.spark.sql._
 
 All data types of Spark SQL are located in the package of
 `org.apache.spark.sql.api.java`. To access or create a data type,
-please use factory methods provided in 
+please use factory methods provided in
 `org.apache.spark.sql.api.java.DataType`.
 
 <table class="table">
@@ -1373,7 +1395,7 @@ please use factory methods provided in
 <tr>
   <td> <b>StructType</b> </td>
   <td> org.apache.spark.sql.api.java </td>
-  <td> 
+  <td>
   DataType.createStructType(<i>fields</i>)<br />
   <b>Note:</b> <i>fields</i> is a List or an array of StructFields.
   Also, two fields with the same name are not allowed.
@@ -1394,7 +1416,7 @@ please use factory methods provided in
 <div data-lang="python"  markdown="1">
 
 All data types of Spark SQL are located in the package of `pyspark.sql`.
-You can access them by doing 
+You can access them by doing
 {% highlight python %}
 from pyspark.sql import *
 {% endhighlight %}
@@ -1518,7 +1540,7 @@ from pyspark.sql import *
 <tr>
   <td> <b>StructType</b> </td>
   <td> list or tuple </td>
-  <td> 
+  <td>
   StructType(<i>fields</i>)<br />
   <b>Note:</b> <i>fields</i> is a Seq of StructFields. Also, two fields with the same
   name are not allowed.



