drill-commits mailing list archives

From tshi...@apache.org
Subject drill git commit: meaningful title
Date Sat, 30 May 2015 19:32:53 GMT
Repository: drill
Updated Branches:
  refs/heads/gh-pages 6ff9a719a -> 5deca3ed9


meaningful title

misnamed file

title formats

inconsistent title format

technical corrections, wordsmithing

merge conflict remnant

wordsmith

forgotten Mehant review correction

fix links 2nd try

minor edit

replace table going off page

fix problems w/Mehant review changes


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/5deca3ed
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/5deca3ed
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/5deca3ed

Branch: refs/heads/gh-pages
Commit: 5deca3ed9fe9da08ca409de7a7b025337a4c2a2f
Parents: 6ff9a71
Author: Kristine Hahn <khahn@maprtech.com>
Authored: Sat May 30 10:16:16 2015 -0700
Committer: Kristine Hahn <khahn@maprtech.com>
Committed: Sat May 30 11:55:10 2015 -0700

----------------------------------------------------------------------
 _data/docs.json                                 |  62 +++---
 _docs/110-troubleshooting.md                    |  46 ++---
 _docs/architecture/030-performance.md           |   2 +-
 .../020-storage-plugin-registration.md          |  10 +-
 .../035-plugin-configuration-basics.md          | 198 ++++++++++++++++++
 .../035-plugin-configuration-introduction.md    | 199 -------------------
 .../040-file-system-storage-plugin.md           |  12 +-
 .../080-drill-default-input-format.md           |   7 +-
 .../090-mongodb-plugin-for-apache-drill.md      |  26 +--
 .../connect-a-data-source/100-mapr-db-format.md |   2 +-
 .../050-json-data-model.md                      |  21 +-
 .../045-distributed-mode-prerequisites.md       |  26 +++
 .../install/045-embedded-mode-prerequisites.md  |  23 ---
 .../030-starting-drill-on-linux-and-mac-os-x.md |   2 +-
 .../050-starting-drill-on-windows.md            |   2 +-
 .../020-using-jdbc-with-squirrel-on-windows.md  |  78 ++++----
 .../030-querying-plain-text-files.md            |   2 +-
 .../data-types/010-supported-data-types.md      |  42 ++--
 .../data-types/020-date-time-and-timestamp.md   |   8 +-
 .../030-handling-different-data-types.md        |   2 +-
 .../sql-functions/020-data-type-conversion.md   |  14 +-
 .../030-date-time-functions-and-arithmetic.md   |  32 ++-
 22 files changed, 391 insertions(+), 425 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_data/docs.json
----------------------------------------------------------------------
diff --git a/_data/docs.json b/_data/docs.json
index 25cc5ff..ff291e5 100644
--- a/_data/docs.json
+++ b/_data/docs.json
@@ -1440,9 +1440,9 @@
                             "parent": "Storage Plugin Configuration", 
                             "previous_title": "Storage Plugin Configuration", 
                             "previous_url": "/docs/storage-plugin-configuration/", 
-                            "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-introduction.md", 
-                            "title": "Plugin Configuration Introduction", 
-                            "url": "/docs/plugin-configuration-introduction/"
+                            "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-basics.md", 
+                            "title": "Plugin Configuration Basics", 
+                            "url": "/docs/plugin-configuration-basics/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -1459,8 +1459,8 @@
                             "next_title": "Workspaces", 
                             "next_url": "/docs/workspaces/", 
                             "parent": "Storage Plugin Configuration", 
-                            "previous_title": "Plugin Configuration Introduction", 
-                            "previous_url": "/docs/plugin-configuration-introduction/", 
+                            "previous_title": "Plugin Configuration Basics", 
+                            "previous_url": "/docs/plugin-configuration-basics/", 
                             "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
                             "title": "File System Storage Plugin", 
                             "url": "/docs/file-system-storage-plugin/"
@@ -1550,8 +1550,8 @@
                             "url": "/docs/drill-default-input-format/"
                         }
                     ], 
-                    "next_title": "Plugin Configuration Introduction", 
-                    "next_url": "/docs/plugin-configuration-introduction/", 
+                    "next_title": "Plugin Configuration Basics", 
+                    "next_url": "/docs/plugin-configuration-basics/", 
                     "parent": "Connect a Data Source", 
                     "previous_title": "Storage Plugin Registration", 
                     "previous_url": "/docs/storage-plugin-registration/", 
@@ -2696,7 +2696,7 @@
             "parent": "Installing Drill in Distributed Mode", 
             "previous_title": "Installing Drill in Distributed Mode", 
             "previous_url": "/docs/installing-drill-in-distributed-mode/", 
-            "relative_path": "_docs/install/045-embedded-mode-prerequisites.md", 
+            "relative_path": "_docs/install/045-distributed-mode-prerequisites.md", 
             "title": "Distributed Mode Prerequisites", 
             "url": "/docs/distributed-mode-prerequisites/"
         }, 
@@ -2968,8 +2968,8 @@
             "next_title": "Workspaces", 
             "next_url": "/docs/workspaces/", 
             "parent": "Storage Plugin Configuration", 
-            "previous_title": "Plugin Configuration Introduction", 
-            "previous_url": "/docs/plugin-configuration-introduction/", 
+            "previous_title": "Plugin Configuration Basics", 
+            "previous_url": "/docs/plugin-configuration-basics/", 
             "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
             "title": "File System Storage Plugin", 
             "url": "/docs/file-system-storage-plugin/"
@@ -3426,7 +3426,7 @@
                             "parent": "Installing Drill in Distributed Mode", 
                             "previous_title": "Installing Drill in Distributed Mode", 
                             "previous_url": "/docs/installing-drill-in-distributed-mode/", 
-                            "relative_path": "_docs/install/045-embedded-mode-prerequisites.md", 
+                            "relative_path": "_docs/install/045-distributed-mode-prerequisites.md", 
                             "title": "Distributed Mode Prerequisites", 
                             "url": "/docs/distributed-mode-prerequisites/"
                         }, 
@@ -3534,7 +3534,7 @@
                     "parent": "Installing Drill in Distributed Mode", 
                     "previous_title": "Installing Drill in Distributed Mode", 
                     "previous_url": "/docs/installing-drill-in-distributed-mode/", 
-                    "relative_path": "_docs/install/045-embedded-mode-prerequisites.md", 
+                    "relative_path": "_docs/install/045-distributed-mode-prerequisites.md", 
                     "title": "Distributed Mode Prerequisites", 
                     "url": "/docs/distributed-mode-prerequisites/"
                 }, 
@@ -5552,7 +5552,7 @@
             "title": "Planning and Execution Options", 
             "url": "/docs/planning-and-execution-options/"
         }, 
-        "Plugin Configuration Introduction": {
+        "Plugin Configuration Basics": {
             "breadcrumbs": [
                 {
                     "title": "Storage Plugin Configuration", 
@@ -5569,9 +5569,9 @@
             "parent": "Storage Plugin Configuration", 
             "previous_title": "Storage Plugin Configuration", 
             "previous_url": "/docs/storage-plugin-configuration/", 
-            "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-introduction.md", 
-            "title": "Plugin Configuration Introduction", 
-            "url": "/docs/plugin-configuration-introduction/"
+            "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-basics.md", 
+            "title": "Plugin Configuration Basics", 
+            "url": "/docs/plugin-configuration-basics/"
         }, 
         "Ports Used by Drill": {
             "breadcrumbs": [
@@ -9344,9 +9344,9 @@
                     "parent": "Storage Plugin Configuration", 
                     "previous_title": "Storage Plugin Configuration", 
                     "previous_url": "/docs/storage-plugin-configuration/", 
-                    "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-introduction.md", 
-                    "title": "Plugin Configuration Introduction", 
-                    "url": "/docs/plugin-configuration-introduction/"
+                    "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-basics.md", 
+                    "title": "Plugin Configuration Basics", 
+                    "url": "/docs/plugin-configuration-basics/"
                 }, 
                 {
                     "breadcrumbs": [
@@ -9363,8 +9363,8 @@
                     "next_title": "Workspaces", 
                     "next_url": "/docs/workspaces/", 
                     "parent": "Storage Plugin Configuration", 
-                    "previous_title": "Plugin Configuration Introduction", 
-                    "previous_url": "/docs/plugin-configuration-introduction/", 
+                    "previous_title": "Plugin Configuration Basics", 
+                    "previous_url": "/docs/plugin-configuration-basics/", 
                     "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
                     "title": "File System Storage Plugin", 
                     "url": "/docs/file-system-storage-plugin/"
@@ -9454,8 +9454,8 @@
                     "url": "/docs/drill-default-input-format/"
                 }
             ], 
-            "next_title": "Plugin Configuration Introduction", 
-            "next_url": "/docs/plugin-configuration-introduction/", 
+            "next_title": "Plugin Configuration Basics", 
+            "next_url": "/docs/plugin-configuration-basics/", 
             "parent": "Connect a Data Source", 
             "previous_title": "Storage Plugin Registration", 
             "previous_url": "/docs/storage-plugin-registration/", 
@@ -11068,7 +11068,7 @@
                             "parent": "Installing Drill in Distributed Mode", 
                             "previous_title": "Installing Drill in Distributed Mode", 
                             "previous_url": "/docs/installing-drill-in-distributed-mode/", 
-                            "relative_path": "_docs/install/045-embedded-mode-prerequisites.md", 
+                            "relative_path": "_docs/install/045-distributed-mode-prerequisites.md", 
                             "title": "Distributed Mode Prerequisites", 
                             "url": "/docs/distributed-mode-prerequisites/"
                         }, 
@@ -11494,9 +11494,9 @@
                             "parent": "Storage Plugin Configuration", 
                             "previous_title": "Storage Plugin Configuration", 
                             "previous_url": "/docs/storage-plugin-configuration/", 
-                            "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-introduction.md", 
-                            "title": "Plugin Configuration Introduction", 
-                            "url": "/docs/plugin-configuration-introduction/"
+                            "relative_path": "_docs/connect-a-data-source/035-plugin-configuration-basics.md", 
+                            "title": "Plugin Configuration Basics", 
+                            "url": "/docs/plugin-configuration-basics/"
                         }, 
                         {
                             "breadcrumbs": [
@@ -11513,8 +11513,8 @@
                             "next_title": "Workspaces", 
                             "next_url": "/docs/workspaces/", 
                             "parent": "Storage Plugin Configuration", 
-                            "previous_title": "Plugin Configuration Introduction", 
-                            "previous_url": "/docs/plugin-configuration-introduction/", 
+                            "previous_title": "Plugin Configuration Basics", 
+                            "previous_url": "/docs/plugin-configuration-basics/", 
                             "relative_path": "_docs/connect-a-data-source/040-file-system-storage-plugin.md", 
                             "title": "File System Storage Plugin", 
                             "url": "/docs/file-system-storage-plugin/"
@@ -11604,8 +11604,8 @@
                             "url": "/docs/drill-default-input-format/"
                         }
                     ], 
-                    "next_title": "Plugin Configuration Introduction", 
-                    "next_url": "/docs/plugin-configuration-introduction/", 
+                    "next_title": "Plugin Configuration Basics", 
+                    "next_url": "/docs/plugin-configuration-basics/", 
                     "parent": "Connect a Data Source", 
                     "previous_title": "Storage Plugin Registration", 
                     "previous_url": "/docs/storage-plugin-registration/", 

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/110-troubleshooting.md
----------------------------------------------------------------------
diff --git a/_docs/110-troubleshooting.md b/_docs/110-troubleshooting.md
index b3e20ea..e0c4227 100644
--- a/_docs/110-troubleshooting.md
+++ b/_docs/110-troubleshooting.md
@@ -28,7 +28,7 @@ Issue the following command to enable the verbose errors option:
 ## Troubleshooting
 If you have any issues in Drill, search the following list for your issue and apply the suggested solution:
 
-**Query Parsing Errors**  
+### Query Parsing Errors  
 Symptom:  
 
        PARSE ERROR: At line x, column x: ...
@@ -36,7 +36,7 @@ Solution: Verify that you are using valid syntax. See [SQL Reference]({{ site.ba
 If you are using common words, they may be reserved words; quote reserved words in back ticks.
 Also use back ticks to quote identifiers that contain special characters, such as back slashes or periods in a file path.
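As a sketch (the file path here is hypothetical), back ticks let Drill parse identifiers containing special characters such as slashes and periods:

    select * from dfs.`/tmp/drill.test/test2.json`;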
 
-**Reserved Words**  
+### Reserved Words  
 Symptom:   
 
        select count from dfs.drill.`test2.json`;
@@ -48,7 +48,7 @@ Solution: Fix with correct syntax. See [Reserved Keywords]({{ site.baseurl }}/do
 
        select `count` from dfs.drill.`test2.json`;  
 
-**Tables not found**  
+### Tables not found  
 Symptom:
  
        select * from dfs.drill.test2.json;
@@ -68,7 +68,7 @@ Solutions:
  * Parquet
  * JSON
 
-**Access nested fields without table name/alias**  
+### Access nested fields without table name/alias  
 Symptom: 
 
        select x.y …  
@@ -77,7 +77,7 @@ Solution: Add table name or alias to the field reference:
 
        select t.x.y from t  
 
-**Unexpected null values for columns in results**  
+### Unexpected null values for columns in results  
 Symptom:  The following type of query returns NULL values:  
 
        select t.price from t 
@@ -86,7 +86,7 @@ Symptom:  The following type of query returns NULL values:
 Solution: Drill is a schema-less system. Verify that column names are typed correctly.
 
 
-**Using functions with incorrect data types**  
+### Using functions with incorrect data types  
 
 Symptom: Example  
 
@@ -104,7 +104,7 @@ Symptom: Example
 
 Solution: Ensure that the function is invoked with the correct data type parameters. In the example above, c3 is an unsupported date type. 
 
-**Query takes a long time to return** 
+### Query takes a long time to return 
 
 Symptom: Query takes longer to return than expected.
 
@@ -114,7 +114,7 @@ Solution: Review the [query profile]({{ site.baseurl }}/docs/query-profiles/) an
  * Look at where Drill is currently spending time and try to optimize those operations.
  * Confirm that Drill is taking advantage of the nature of your data, including things like partition pruning and projection pushdown.
 
-**Schema changes**  
+### Schema changes  
 
 Symptom:  
 
@@ -125,7 +125,7 @@ Symptom:
 
 Solution: Drill does not fully support schema changes.  In this case, you will need to either ensure that your schemas are the same or only select columns that share schema.
 
-**Timestamps and Timezones other than UTC**  
+### Timestamps and Timezones other than UTC  
 
 Symptoms: Issues with timestamps and timezones, for example: Illegal instant due to time zone offset transition (America/New_York)
 
@@ -135,61 +135,61 @@ Solution: Convert data to UTC format. You are most likely trying to import date
 
 `http://www.openkb.info/2015/05/understanding-drills-timestamp-and.html`  
 
-**Unexpected ODBC issues**  
+### Unexpected ODBC issues  
 
 Symptom: ODBC errors.
 
 Solution: Make sure that the ODBC driver version is compatible with the server version. 
 Turn on ODBC driver debug logging to better understand the failure.  
 
-**Connectivity issues when connecting via ZooKeeper for JDBC/ODBC**  
+### Connectivity issues when connecting via ZooKeeper for JDBC/ODBC  
 
 Symptom: Client cannot resolve ZooKeeper host names for JDBC/ODBC.
 
 Solution: Ensure that ZooKeeper is up and running. Verify that Drill has the correct drill-override.conf settings for the ZooKeeper quorum.
 
-**Metadata queries take a long time to return**  
+### Metadata queries take a long time to return  
 
 Symptom: Running SHOW databases/schemas/tables hangs (in general any information_schema queries hang).
 
 Solution: Disable incorrectly configured storage plugins or start appropriate services. Check compatibility matrix for the appropriate versions.  
 
-**Unexpected results due to implicit casting**  
+### Unexpected results due to implicit casting  
 
 Symptom: Drill implicitly casts based on order of precedence.
 
 Solution: Review Drill casting behaviors and explicitly cast for the expected results. See [Data Types]({{ site.baseurl }}/docs/handling-different-data-types/).
 
-**Column alias causes an error**  
+### Column alias causes an error  
 
 Symptom: Drill is not case sensitive, and you can provide any alias for a column name. However, if the storage type is case sensitive, the alias name may conflict and cause errors.
 
 Solution: Verify that the column alias does not conflict with the storage type. See [Lexical Structures]({{ site.baseurl }}/docs/lexical-structure/#case-sensitivity).  
 
-**List (arrays) contains null**  
+### List (arrays) contains null  
 
 Symptom: UNSUPPORTED\_OPERATION ERROR: Null values are not supported in lists by default. Please set store.json.all\_text_mode to true to read lists containing nulls. Be advised that this will treat JSON null values as a string containing the word 'null'.
 
 Solution: Change Drill session settings to enable all_text_mode per message.  
 Avoid selecting fields that are arrays containing nulls.
 
-**SELECT COUNT (\*) takes a long time to run**  
+### SELECT COUNT (\*) takes a long time to run  
 
 Solution: In some cases, the underlying storage format does not have a built-in capability to return a count of records in a table. In these cases, Drill does a full scan of the data to determine the number of records.
 
-**Tableau issues**  
+### Tableau issues  
 
 Symptom: You see a lot of error messages in ODBC trace files or the performance is slow.
 
 Solution: Verify that you have installed the TDC file shipped with the ODBC driver.  
 
-**Group by using alias**  
+### Group by using alias  
 
 Symptom: Invalid column.
 
 Solution: Not supported. Use column name and/or expression directly.  
 
-**Casting a Varchar string to an integer results in an error**  
+### Casting a Varchar string to an integer results in an error  
 
 Symptom: 
 
@@ -197,7 +197,7 @@ Symptom:
 
 Solution: Per the ANSI SQL specification CAST to INT does not support empty strings.  If you want to change this behavior, you can set Drill to use the cast empty string to null behavior.  This can be done using the drill.exec.functions.cast_empty_string_to_null SESSION/SYSTEM option. 
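For example, the option named in the solution can be set per session (a sketch; verify the option name against your Drill version):

    ALTER SESSION SET `drill.exec.functions.cast_empty_string_to_null` = true;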
  
-**Unexpected exception during fragment initialization**  
+### Unexpected exception during fragment initialization  
 
 Symptom: The error occurred during the Foreman phase of the query. The error typically occurs due to the following common causes:  
 
@@ -206,7 +206,7 @@ Symptom: The error occurred during the Foreman phase of the query. The error typ
 
 Solution: Enable the verbose errors option and run the query again to see if further insight is provided.  
 
-**Queries running out of memory**  
+### Queries running out of memory  
 
 Symptom: 
 
@@ -220,7 +220,7 @@ Solution:
 * Disable hash aggregation and hash sort for your session
 * See [Configuration Options]({{ site.baseurl }}/docs/configuration-options-introduction/)  
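As a sketch, disabling the hash-based operators for a session uses the planner options (option names assumed from standard Drill configuration; check `sys.options` on your cluster):

    ALTER SESSION SET `planner.enable_hashagg` = false;
    ALTER SESSION SET `planner.enable_hashjoin` = false;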
 
-**Unclear Error Message**  
+### Unclear Error Message  
 
 Symptom: Cannot determine issue from error message.
 
@@ -230,7 +230,7 @@ Solution: Turn on verbose errors.
 
 Determine your currently connected drillbit using select * from sys.drillbits.  Then review the Drill logs from that drillbit.
 
-**SQLLine error starting Drill in embedded mode**  
+### SQLLine error starting Drill in embedded mode  
 
 Symptom:  
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/architecture/030-performance.md
----------------------------------------------------------------------
diff --git a/_docs/architecture/030-performance.md b/_docs/architecture/030-performance.md
index 419e538..4da897e 100644
--- a/_docs/architecture/030-performance.md
+++ b/_docs/architecture/030-performance.md
@@ -43,7 +43,7 @@ process at a glance.
 
 ![drill compiler]({{ site.baseurl }}/docs/img/58.png)
 
-**Optimistic and pipelined query execution**
+**_Optimistic and pipelined query execution_**
 
 Drill adopts an optimistic execution model to process queries. Drill assumes
 that failures are infrequent within the short span of a query and therefore

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/connect-a-data-source/020-storage-plugin-registration.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/020-storage-plugin-registration.md b/_docs/connect-a-data-source/020-storage-plugin-registration.md
index 1afe31e..5afbabf 100644
--- a/_docs/connect-a-data-source/020-storage-plugin-registration.md
+++ b/_docs/connect-a-data-source/020-storage-plugin-registration.md
@@ -8,16 +8,16 @@ You connect Drill to a file system, Hive, HBase, or other data source using stor
 
 The Drill installation registers the `cp`, `dfs`, `hbase`, `hive`, and `mongo` storage plugins instances by default.
 
-* `cp`
+* `cp`  
   Points to a JAR file in the Drill classpath that contains the Transaction Processing Performance Council (TPC) benchmark schema TPC-H that you can query. 
-* `dfs`
+* `dfs`  
   Points to the local file system on your machine, but you can configure this instance to
 point to any distributed file system, such as a Hadoop or S3 file system. 
-* `hbase`
+* `hbase`  
    Provides a connection to HBase/M7.
-* `hive`
+* `hive`  
    Integrates Drill with the Hive metadata abstraction of files, HBase/M7, and libraries to read data and operate on SerDes and UDFs.
-* `mongo`
+* `mongo`  
    Provides a connection to MongoDB data.
 
 In the Drill sandbox,  the `dfs` storage plugin connects you to the MapR File System (MFS). Using an installation of Drill instead of the sandbox, `dfs` connects you to the root of your file system.

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/connect-a-data-source/035-plugin-configuration-basics.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/035-plugin-configuration-basics.md b/_docs/connect-a-data-source/035-plugin-configuration-basics.md
new file mode 100644
index 0000000..7844c28
--- /dev/null
+++ b/_docs/connect-a-data-source/035-plugin-configuration-basics.md
@@ -0,0 +1,198 @@
+---
+title: "Plugin Configuration Basics"
+parent: "Storage Plugin Configuration"
+---
+When you add or update storage plugin instances on one Drill node in a Drill
+cluster, Drill broadcasts the information to other Drill nodes 
+to synchronize the storage plugin configurations. You do not need to
+restart any of the Drillbits when you add or update a storage plugin instance.
+
+Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to: `http://<IP address or host name>:8047`, and then go to the Storage tab. 
+
+To create and configure a new storage plugin:
+
+1. Enter a storage name in New Storage Plugin.
+   Each storage plugin registered with Drill must have a distinct
+   name. Names are case-sensitive.
+2. Click Create.  
+3. In Configuration, configure attributes of the storage plugin, if applicable, using JSON formatting. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users. 
+4. Click Create.
+
+Click Update to reconfigure an existing, enabled storage plugin.
+
+## Storage Plugin Attributes
+The following graphic shows key attributes of a typical dfs storage plugin:  
+![dfs plugin]({{ site.baseurl }}/docs/img/connect-plugin.png)
+## List of Attributes and Definitions
+The following table describes the attributes you configure for storage plugins. 
+<table>
+  <tr>
+    <th>Attribute</th>
+    <th>Example Values</th>
+    <th>Required</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td>"type"</td>
+    <td>"file"<br>"hbase"<br>"hive"<br>"mongo"</td>
+    <td>yes</td>
+    <td>A valid storage plugin type name.</td>
+  </tr>
+  <tr>
+    <td>"enabled"</td>
+    <td>true<br>false</td>
+    <td>yes</td>
+    <td>State of the storage plugin.</td>
+  </tr>
+  <tr>
+    <td>"connection"</td>
+    <td>"classpath:///"<br>"file:///"<br>"mongodb://localhost:27017/"<br>"maprfs:///"</td>
+    <td>implementation-dependent</td>
+    <td>Type of distributed file system, such as HDFS, Amazon S3, or files in your file system.</td>
+  </tr>
+  <tr>
+    <td>"workspaces"</td>
+    <td>null<br>"logs"</td>
+    <td>no</td>
+    <td>One or more unique workspace names. If a workspace name is used more than once, only the last definition is effective. </td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "location"</td>
+    <td>"location": "/Users/johndoe/mydata"<br>"location": "/tmp"</td>
+    <td>no</td>
+    <td>Full path to a directory on the file system.</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "writable"</td>
+    <td>true<br>false</td>
+    <td>no</td>
+    <td>Whether the workspace allows write operations, such as creating tables with CTAS.</td>
+  </tr>
+  <tr>
+    <td>"workspaces". . . "defaultInputFormat"</td>
+    <td>null<br>"parquet"<br>"csv"<br>"json"</td>
+    <td>no</td>
+    <td>Format for reading data, regardless of extension. Default = Parquet.</td>
+  </tr>
+  <tr>
+    <td>"formats"</td>
+    <td>"psv"<br>"csv"<br>"tsv"<br>"parquet"<br>"json"<br>"avro"<br>"maprdb" *</td>
+    <td>yes</td>
+    <td>One or more valid file formats for reading. Drill implicitly detects the formats of some files based on the extension or on bits of data in the file; other formats require configuration.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "type"</td>
+    <td>"text"<br>"parquet"<br>"json"<br>"maprdb" *</td>
+    <td>yes</td>
+    <td>Format type. You can define two formats, csv and psv, as type "text" with different delimiters. </td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "extensions"</td>
+    <td>["csv"]</td>
+    <td>format-dependent</td>
+    <td>Extensions of the files that Drill can read.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "delimiter"</td>
+    <td>"\t"<br>","</td>
+    <td>format-dependent</td>
+    <td>One or more characters that separate records in a delimited text file, such as CSV. Use the 4-digit hex ASCII code syntax \uXXXX for a non-printable delimiter. </td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "fieldDelimiter"</td>
+    <td>","</td>
+    <td>no</td>
+    <td>A single character that separates each value in a column of a delimited text file.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "quote"</td>
+    <td>"""</td>
+    <td>no</td>
+    <td>A single character that starts/ends a value in a delimited text file.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "escape"</td>
+    <td>"`"</td>
+    <td>no</td>
+    <td>A single character that escapes the quote character.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "comment"</td>
+    <td>"#"</td>
+    <td>no</td>
+    <td>The line decoration that starts a comment line in the delimited text file.</td>
+  </tr>
+  <tr>
+    <td>"formats" . . . "skipFirstLine"</td>
+    <td>true</td>
+    <td>no</td>
+    <td>Whether to skip the first line, typically a header, when reading a delimited text file.
+    </td>
+  </tr>
+</table>
+
+\* Pertains only to distributed Drill installations using the mapr-drill package.  
+
+## Using the Formats
+
+You can use the following attributes when the `sys.options` property setting `exec.storage.enable_new_text_reader` is true (the default):
+
+* comment  
+* escape  
+* fieldDelimiter  
+* quote  
+* skipFirstLine
+
+The "formats" apply to all workspaces defined in a storage plugin. A typical use case defines separate storage plugins for different root directories to query the files stored below the directory. An alternative use case defines multiple formats within the same storage plugin and names target files using different extensions to match the formats.
+
+The following example of a storage plugin for reading CSV files with the new text reader includes two formats for reading files having either a `csv` or `csv2` extension. The text reader does include the first line of column names in the queries of `.csv` files but does not include it in queries of `.csv2` files. 
+
+    "csv": {
+      "type": "text",
+      "extensions": [
+        "csv"
+      ],  
+      "delimiter": "," 
+    },  
+    "csv_with_header": {
+      "type": "text",
+      "extensions": [
+        "csv2"
+      ],  
+      "comment": "&",
+      "skipFirstLine": true,
+      "delimiter": "," 
+    },  
+
+## Using Other Attributes
+
+The configuration of other attributes, such as `size.calculator.enabled` in the hbase plugin and `configProps` in the hive plugin, are implementation-dependent and beyond the scope of this document.
+
+## Case-sensitive Names
+As previously mentioned, workspace and storage plugin names are case-sensitive. For example, the following query uses a storage plugin name `dfs` and a workspace name `clicks`. When you refer to `dfs.clicks` in an SQL statement, use the defined case:
+
+    0: jdbc:drill:> USE dfs.clicks;
+
+Using uppercase letters in the query after defining the storage plugin and workspace names in lowercase does not work. 
+
+## Storage Plugin REST API
+
+Drill provides a REST API that you can use to create a storage plugin. Use an HTTP POST and pass two properties:
+
+* name  
+  The plugin name. 
+
+* config  
+  The storage plugin definition as you would enter it in the Web UI.
+
+For example, this command creates a plugin named myplugin for reading files of an unknown type located on the root of the file system:
+
+    curl -X POST -H "Content-Type: application/json" -d '{"name":"myplugin", "config": {"type": "file", "enabled": false, "connection": "file:///", "workspaces": { "root": { "location": "/", "writable": false, "defaultInputFormat": null}}, "formats": null}}' http://localhost:8047/storage/myplugin.json
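The same REST call can be scripted without curl. The following Python sketch builds the plugin definition and POSTs it to the storage endpoint; the host, port, and plugin name mirror the curl example and are assumptions about your deployment:

```python
import json
from urllib import request

# Storage plugin definition matching the curl example above.
config = {
    "type": "file",
    "enabled": False,
    "connection": "file:///",
    "workspaces": {
        "root": {"location": "/", "writable": False, "defaultInputFormat": None}
    },
    "formats": None,
}

def create_plugin(name, config, host="localhost", port=8047):
    """POST a storage plugin definition to Drill's REST endpoint."""
    body = json.dumps({"name": name, "config": config}).encode("utf-8")
    req = request.Request(
        f"http://{host}:{port}/storage/{name}.json",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# create_plugin("myplugin", config)  # requires a running Drillbit
```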
+
+## Bootstrapping a Storage Plugin
+
+If you need to add a storage plugin to Drill and do not want to use a web browser, you can create a [bootstrap-storage-plugins.json](https://github.com/apache/drill/blob/master/contrib/storage-hbase/src/main/resources/bootstrap-storage-plugins.json) file and include it on the classpath when starting Drill. The storage plugin loads when Drill starts up.
+
+Bootstrapping a storage plugin works only when the first drillbit in the cluster starts up. After cluster startup, you have to use the REST API or Drill Web UI to add a storage plugin. 
+
+If you configure an HBase storage plugin using the bootstrap-storage-plugins.json file and HBase is not installed, you might experience a delay when executing queries. Configure the [HBase client timeout](http://hbase.apache.org/book.html#config.files) and retry settings in the config block of the HBase plugin instance configuration.
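+
+The following sketch shows the general shape of a bootstrap-storage-plugins.json file; plugin definitions are nested under a top-level `storage` object, as in the linked HBase example. The `dfs` definition here is illustrative:
+
+    {
+      "storage": {
+        "dfs": {
+          "type": "file",
+          "enabled": true,
+          "connection": "file:///",
+          "workspaces": {
+            "root": {
+              "location": "/",
+              "writable": false,
+              "defaultInputFormat": null
+            }
+          },
+          "formats": null
+        }
+      }
+    }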

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/035-plugin-configuration-introduction.md b/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
deleted file mode 100644
index c6bcbf8..0000000
--- a/_docs/connect-a-data-source/035-plugin-configuration-introduction.md
+++ /dev/null
@@ -1,199 +0,0 @@
----
-title: "Plugin Configuration Introduction"
-parent: "Storage Plugin Configuration"
----
-When you add or update storage plugin instances on one Drill node in a Drill
-cluster, Drill broadcasts the information to all of the other Drill nodes 
-to have identical storage plugin configurations. You do not need to
-restart any of the Drillbits when you add or update a storage plugin instance.
-
-Use the Drill Web UI to update or add a new storage plugin. Launch a web browser, go to: `http://<IP address or host name>:8047`, and then go to the Storage tab. 
-
-To create and configure a new storage plugin:
-
-1. Enter a storage name in New Storage Plugin.
-   Each storage plugin registered with Drill must have a distinct
-name. Names are case-sensitive.
-2. Click Create.  
-3. In Configuration, configure attributes of the storage plugin, if applicable, using JSON formatting. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users. 
-4. Click Create.
-
-Click Update to reconfigure an existing, enabled storage plugin.
-
-## Storage Plugin Attributes
-The following graphic shows key attributes of a typical dfs storage plugin:  
-![dfs plugin]({{ site.baseurl }}/docs/img/connect-plugin.png)
-## List of Attributes and Definitions
-The following table describes the attributes you configure for storage plugins. 
-<table>
-  <tr>
-    <th>Attribute</th>
-    <th>Example Values</th>
-    <th>Required</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>"type"</td>
-    <td>"file"<br>"hbase"<br>"hive"<br>"mongo"</td>
-    <td>yes</td>
-    <td>A valid storage plugin type name.</td>
-  </tr>
-  <tr>
-    <td>"enabled"</td>
-    <td>true<br>false</td>
-    <td>yes</td>
-    <td>State of the storage plugin.</td>
-  </tr>
-  <tr>
-    <td>"connection"</td>
-    <td>"classpath:///"<br>"file:///"<br>"mongodb://localhost:27017/"<br>"maprfs:///"</td>
-    <td>implementation-dependent</td>
-    <td>Type of distributed file system, such as HDFS, Amazon S3, or files in your file system.</td>
-  </tr>
-  <tr>
-    <td>"workspaces"</td>
-    <td>null<br>"logs"</td>
-    <td>no</td>
-    <td>One or more unique workspace names. If a workspace name is used more than once, only the last definition is effective. </td>
-  </tr>
-  <tr>
-    <td>"workspaces". . . "location"</td>
-    <td>"location": "/Users/johndoe/mydata"<br>"location": "/tmp"</td>
-    <td>no</td>
-    <td>Full path to a directory on the file system.</td>
-  </tr>
-  <tr>
-    <td>"workspaces". . . "writable"</td>
-    <td>true<br>false</td>
-    <td>no</td>
-    <td>One or more unique workspace names. If defined more than once, the last workspace name overrides the others.</td>
-  </tr>
-  <tr>
-    <td>"workspaces". . . "defaultInputFormat"</td>
-    <td>null<br>"parquet"<br>"csv"<br>"json"</td>
-    <td>no</td>
-    <td>Format for reading data, regardless of extension. Default = Parquet.</td>
-  </tr>
-  <tr>
-    <td>"formats"</td>
-    <td>"psv"<br>"csv"<br>"tsv"<br>"parquet"<br>"json"<br>"avro"<br>"maprdb" *</td>
-    <td>yes</td>
-    <td>One or more valid file formats for reading. Drill implicitly detects formats of some files based on extension or bits of data in the file, others require configuration.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "type"</td>
-    <td>"text"<br>"parquet"<br>"json"<br>"maprdb" *</td>
-    <td>yes</td>
-    <td>Format type. You can define two formats, csv and psv, as type "Text", but having different delimiters. </td>
-  </tr>
-  <tr>
-    <td>formats . . . "extensions"</td>
-    <td>["csv"]</td>
-    <td>format-dependent</td>
-    <td>Extensions of the files that Drill can read.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "delimiter"</td>
-    <td>"\t"<br>","</td>
-    <td>format-dependent</td>
-    <td>One or more characters that separate records in a delimited text file, such as CSV. Use a 4-digit hex ascii code syntax \uXXXX for a non-printable delimiter. </td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "fieldDelimiter"</td>
-    <td>","</td>
-    <td>no</td>
-    <td>A single character that separates each value in a column of a delimited text file.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "quote"</td>
-    <td>"""</td>
-    <td>no</td>
-    <td>A single character that starts/ends a value in a delimited text file.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "escape"</td>
-    <td>"`"</td>
-    <td>no</td>
-    <td>A single character that escapes the quote character.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "comment"</td>
-    <td>"#"</td>
-    <td>no</td>
-    <td>The line decoration that starts a comment line in the delimited text file.</td>
-  </tr>
-  <tr>
-    <td>"formats" . . . "skipFirstLine"</td>
-    <td>true</td>
-    <td>no</td>
-    <td>To include or omits the header when reading a delimited text file.
-    </td>
-  </tr>
-</table>
-
-\* Pertains only to distributed drill installations using the mapr-drill package.  
-
-## Using the Formats
-
-You can use the following attributes when the `sys.options` property setting `exec.storage.enable_new_text_reader` is true (the default):
-
-* comment  
-* escape  
-* fieldDeliimiter  
-* quote  
-* skipFirstLine
-
-The "formats" apply to all workspaces defined in a storage plugin. A typical use case defines separate storage plugins for different root directories to query the files stored below the directory. An alternative use case defines multiple formats within the same storage plugin and names target files using different extensions to match the formats.
-
-The following example of a storage plugin for reading CSV files with the new text reader includes two formats for reading files having either a `csv` or `csv2` extension. The text reader does include the first line of column names in the queries of `.csv` files but does not include it in queries of `.csv2` files. 
-
-    "csv": {
-      "type": "text",
-      "extensions": [
-        "csv"
-      ],  
-      "delimiter": "," 
-    },  
-    "csv_with_header": {
-      "type": "text",
-      "extensions": [
-        "csv2"
-      ],  
-      "comment": "&",
-      "skipFirstLine": true,
-      "delimiter": "," 
-    },  
-
-## Using Other Attributes
-
-The configuration of other attributes, such as `size.calculator.enabled` in the hbase plugin and `configProps` in the hive plugin, are implementation-dependent and beyond the scope of this document.
-
-## Case-sensitive Names
-As previously mentioned, workspace and storage plugin names are case-sensitive. For example, the following query uses a storage plugin name `dfs` and a workspace name `clicks`. When you refer to `dfs.clicks` in an SQL statement, use the defined case:
-
-    0: jdbc:drill:> USE dfs.clicks;
-
-For example, using uppercase letters in the query after defining the storage plugin and workspace names using lowercase letters does not work. 
-
-## Storage Plugin REST API
-
-Drill provides a REST API that you can use to create a storage plugin. Use an HTTP POST and pass two properties:
-
-* name  
-  The plugin name. 
-
-* config  
-  The storage plugin definition as you would enter it in the Web UI.
-
-For example, this command creates a plugin named myplugin for reading files of an unknown type located on the root of the file system:
-
-    curl -X POST -/json" -d '{"name":"myplugin", "config": {"type": "file", "enabled": false, "connection": "file:///", "workspaces": { "root": { "location": "/", "writable": false, "defaultInputFormat": null}}, "formats": null}}' http://localhost:8047/storage/myplugin.json
-
-## Bootstrapping a Storage Plugin
-
-If you need to add a storage plugin to Drill and do not want to use a web browser, you can create a [bootstrap-storage-plugins.json](https://github.com/apache/drill/blob/master/contrib/storage-hbase/src/main/resources/bootstrap-storage-plugins.json) file and include it on the classpath when starting Drill. The storage plugin loads when Drill starts up.
-
-Bootstrapping a storage plugin works only when the first drillbit in the cluster first starts up. After cluster startup, you have to use the REST API or Drill Web UI to add a storage plugin. 
-
-
-If you configure an HBase storage plugin using bootstrap-storage-plugins.json file and HBase is not installed, you might experience a delay when executing the queries. Configure the [HBase client timeout](http://hbase.apache.org/book.html#config.files) and retry settings in the config block of HBase plugin instance configuration.

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/connect-a-data-source/040-file-system-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/040-file-system-storage-plugin.md b/_docs/connect-a-data-source/040-file-system-storage-plugin.md
index 9f16bde..d7e2579 100644
--- a/_docs/connect-a-data-source/040-file-system-storage-plugin.md
+++ b/_docs/connect-a-data-source/040-file-system-storage-plugin.md
@@ -4,21 +4,17 @@ parent: "Storage Plugin Configuration"
 ---
 You can register a storage plugin instance that connects Drill to a local file system or to a distributed file system registered in `core-site.xml`, such as S3
 or HDFS. By
-default, Drill includes an instance named `dfs` that points to the local file
-system on your machine. 
+default, Apache Drill includes a storage plugin named `dfs` that points to the local file
+system on your machine. 
 
 ## Connecting Drill to a File System
 
-In a Drill cluster, you typically do not query the local file system, but instead place files on the distributed file system. You configure the connection property of the storage plugin workspace to connect Drill to a distributed file system. For example, the following connection properties connect Drill to an HDFS, MapR-FS, or Mongo-DB cluster:
+In a Drill cluster, you typically do not query the local file system, but instead place files on the distributed file system. You configure the connection property of the storage plugin workspace to connect Drill to a distributed file system. For example, the following connection properties connect Drill to an HDFS or MapR-FS cluster:
 
 * HDFS  
   `"connection": "hdfs://<IP Address>:<Port>/"`  
 * MapR-FS Remote Cluster  
   `"connection": "maprfs://<IP Address>/"`  
-* Mongo-DB Cluster  
-  `"connection": "mongodb://<IP Address>:<Port>/"
-
-The Drill installation includes a [Mongo-DB storage plugin]({{site.baseurl}}/docs/mongodb-plugin-for-apache-drill).
 
 To register a local or a distributed file system with Apache Drill, complete
 the following steps:
@@ -69,7 +65,7 @@ the following steps:
 name node and the port number.
   4. Click **Enable**.
 
-Once you have configured a storage plugin instance for the file system, you
+After you have configured a storage plugin instance for the file system, you
 can issue Drill queries against it.
 
 The following example shows an instance of a file type storage plugin with a

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/connect-a-data-source/080-drill-default-input-format.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/080-drill-default-input-format.md b/_docs/connect-a-data-source/080-drill-default-input-format.md
index e2922ab..f8e4e66 100644
--- a/_docs/connect-a-data-source/080-drill-default-input-format.md
+++ b/_docs/connect-a-data-source/080-drill-default-input-format.md
@@ -61,9 +61,4 @@ steps:
 
 ## Querying Compressed JSON
 
-You can query compressed JSON in .gz files as well as uncompressed files having the .json extension. First, add the gz extension to a storage plugin, and then use that plugin to query the compressed file.
-
-      "extensions": [
-        "json",
-        "gz"
-      ]
+You can query compressed JSON in .gz files as well as uncompressed files having the .json extension. The .json extension must precede the .gz extension in the file name, for example, `proddata.json.gz`.
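+
+For example, assuming the compressed file resides in a hypothetical `dfs.clicks` workspace, you can query it as you would an uncompressed JSON file:
+
+    0: jdbc:drill:> SELECT * FROM dfs.clicks.`proddata.json.gz` LIMIT 5;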

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
index 72bdbeb..450d079 100644
--- a/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
+++ b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
@@ -5,11 +5,7 @@ parent: "Connect a Data Source"
 ## Overview
 
 Drill provides a mongodb format plugin to connect to MongoDB, and run queries
-to read, but not to write, the Mongo data ANSI SQL. Attempting to write data back to Mongo results in an error. You do not need any upfront schema definitions. 
-
-This procedures in this section assume that you have Drill installed locally (embedded mode),
-as well as MongoDB. Examples in this tutorial use zip code aggregation data
-provided by MongoDB. Before You Begin provides links to download tools and data.
+to read, but not write, Mongo data. Attempting to write data back to Mongo results in an error. You do not need any upfront schema definitions. 
 
 {% include startnote.html %}A local instance of Drill is used in this tutorial for simplicity. {% include endnote.html %}
 
@@ -18,11 +14,11 @@ You can also run Drill and MongoDB together in distributed mode.
 ### Before You Begin
 
 Before you can query MongoDB with Drill, you must have Drill and MongoDB
-installed on your machine. You may also want to import the MongoDB zip code
-data to run the example queries on your machine.
+installed on your machine. Examples in this tutorial use zip code aggregation data
+provided by MongoDB that you download in the following steps:
 
-  1. [Install Drill]({{ site.baseurl }}/docs/installing-drill-in-embedded-mode), if you do not already have it installed on your machine.
-  2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do not already have it installed on your machine.
+  1. [Install Drill]({{ site.baseurl }}/docs/installing-drill-in-embedded-mode), if you do not already have it installed.
+  2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do not already have it installed.
   3. [Import the MongoDB zip code sample data set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set). You can use Mongo Import to get the data. 
 
 ## Configuring MongoDB
@@ -32,13 +28,12 @@ UI to connect to Drill. Drill must be running in order to access the Web UI.
 
 Complete the following steps to configure MongoDB as a data source for Drill:
 
-  1. [Start the Drill]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/) shell for your environment.
+  1. [Start the Drill shell]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/).
 
-     Do not enter any commands. You will return to the command prompt after
-completing the configuration in the Drill Web UI.
+     The Drill shell needs to be running to access the Drill Web UI.
   2. Open a browser window, and navigate to the Drill Web UI at `http://localhost:8047`.
   3. In the navigation bar, click **Storage**.
-  4. Under Disabled Storage Plugins, select **Update** next to the `mongo` instance if the instance exists. If the instance does not exist, create an instance for MongoDB.
+  4. Under Disabled Storage Plugins, select **Update** next to the `mongo` storage plugin.
   5. In the Configuration window, verify that `"enabled"` is set to ``"true."``
 
      **Example**
@@ -50,12 +45,11 @@ completing the configuration in the Drill Web UI.
         }
 
      {% include startnote.html %}27017 is the default port for `mongodb` instances.{% include endnote.html %} 
-  6. Click **Enable** to enable the instance, and save the configuration.
-  7. Navigate back to the Drill command line so you can query MongoDB.
+  6. Click **Enable** to enable the storage plugin, and save the configuration.
 
 ## Querying MongoDB
 
-You can issue the `SHOW DATABASES `command to see a list of databases from all
+In the Drill shell, you can issue the `SHOW DATABASES` command to see a list of databases from all
 Drill data sources, including MongoDB. If you downloaded the zip codes file,
 you should see `mongo.zipdb` in the results.
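 
 For example, assuming the zip code sample data was imported into a database named `zipdb` with a collection named `zipcodes`, a query such as the following returns documents from MongoDB:
 
     0: jdbc:drill:> SELECT * FROM mongo.zipdb.zipcodes LIMIT 2;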
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/connect-a-data-source/100-mapr-db-format.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/100-mapr-db-format.md b/_docs/connect-a-data-source/100-mapr-db-format.md
index b091f8e..e1ac5aa 100644
--- a/_docs/connect-a-data-source/100-mapr-db-format.md
+++ b/_docs/connect-a-data-source/100-mapr-db-format.md
@@ -4,7 +4,7 @@ parent: "Connect a Data Source"
 ---
 The MapR-DB format is not included in Apache Drill release. If you install Drill from the `mapr-drill` package on a MapR node, the MapR-DB format appears in the `dfs` storage plugin instance. The `maprdb` format improves the
 estimated number of rows that Drill uses to plan a query. It also enables you
-to query tables like you would query files in a file system because MapR-DB
+to query tables as you would query files in a file system because MapR-DB
 and MapR-FS share the same namespace.
 
 You can query tables stored across multiple directories. You do not need to

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index 0c51757..1e68783 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -215,11 +215,8 @@ Sum the ticket sales by combining the `SUM`, `FLATTEN`, and `KVGEN` functions in
     1 row selected (0.244 seconds)
 
 ### Example: Aggregate and Sort Data
-<<<<<<< HEAD
-Sum the ticket sales by state and group by day and sort in ascending order.
-=======
+
 Sum and group the ticket sales by date and sort in ascending order of total tickets sold.
->>>>>>> d6f216a60b04b5366a3f3905450988597a421118
 
     SELECT `right`(tkt.tot_sales.key,2) `December Date`,
     SUM(tkt.tot_sales.`value`) AS TotalSales
@@ -365,14 +362,14 @@ Use dot notation, for example `t.birth.lastname` and `t.birth.bearer.max_hdl` to
 ## Limitations and Workarounds
 In most cases, you can use a workaround, presented in the following sections, to overcome the following limitations:
 
-* Array at the root level
-* Complex nested data
-* Empty array
-* Lengthy JSON objects
-* Complex JSON objects
-* Nested column names
-* Schema changes
-* Selecting all in a JSON directory query
+* [Array at the root level]({{site.baseurl}}/docs/json-data-model/#array-at-the-root-level)
+* [Complex nested data]({{site.baseurl}}/docs/json-data-model/#complex-nested-data)
+* [Empty array]({{site.baseurl}}/docs/json-data-model/#empty-array)
+* [Lengthy JSON objects]({{site.baseurl}}/docs/json-data-model/#lengthy-json-objects)
+* [Complex JSON objects]({{site.baseurl}}/docs/json-data-model/#complex-json-objects)
+* [Nested column names]({{site.baseurl}}/docs/json-data-model/#nested-column-names)
+* [Schema changes]({{site.baseurl}}/docs/json-data-model/#schema-changes)
+* [Selecting all in a JSON directory query]({{site.baseurl}}/docs/json-data-model/#selecting-all-in-a-json-directory-query)
 
 ### Array at the root level
 Drill cannot read an array at the root level, outside an object.

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/install/045-distributed-mode-prerequisites.md
----------------------------------------------------------------------
diff --git a/_docs/install/045-distributed-mode-prerequisites.md b/_docs/install/045-distributed-mode-prerequisites.md
new file mode 100644
index 0000000..e2c7265
--- /dev/null
+++ b/_docs/install/045-distributed-mode-prerequisites.md
@@ -0,0 +1,26 @@
+---
+title: "Distributed Mode Prerequisites"
+parent: "Installing Drill in Distributed Mode"
+---
+You can install Apache Drill on one or more nodes to
+run it in a clustered environment.
+
+## Prerequisites
+
+Before you install Apache Drill on nodes in your cluster, install and configure the
+following software and services:
+
+  * [Oracle JDK version 7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html) (Required)
+  * Configured and running a ZooKeeper quorum (Required)
+  * Configured and running a Hadoop cluster (Recommended)
+  * DNS (Recommended)
+
+To install Apache Drill in distributed mode, complete the following steps:
+
+  1. Install Drill on nodes in the cluster.
+  2. Configure a cluster ID and add ZooKeeper information.
+  3. Connect Drill to your data sources.
+
+## Connecting Drill to Distributed Data Sources
+
+In a Drill cluster, you typically do not query the local file system, but instead query files on the distributed file system, databases supported through a storage plugin, or the Hive metastore. You use a [storage plugin]({{site.baseurl}}/docs/connect-a-data-source) to connect to these data sources.

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/install/045-embedded-mode-prerequisites.md
----------------------------------------------------------------------
diff --git a/_docs/install/045-embedded-mode-prerequisites.md b/_docs/install/045-embedded-mode-prerequisites.md
deleted file mode 100644
index 73f4ab0..0000000
--- a/_docs/install/045-embedded-mode-prerequisites.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Distributed Mode Prerequisites"
-parent: "Installing Drill in Distributed Mode"
----
-You can install Apache Drill in distributed mode on one or multiple nodes to
-run it in a clustered environment.
-
-To install Apache Drill in distributed mode, complete the following steps:
-
-  1. Install Drill on each designated node in the cluster.
-  2. Configure a cluster ID and add Zookeeper information.
-  3. Connect Drill to your data sources. 
-
-
-**Prerequisites**
-
-Before you install Apache Drill on nodes in your cluster, install and configure the
-following software and services:
-
-  * [Oracle JDK version 7](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html) (Required)
-  * Configured and running a ZooKeeper quorum (Required)
-  * Configured and running a Hadoop cluster (Recommended)
-  * DNS (Recommended)

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
index 06883c1..e29c593 100644
--- a/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
+++ b/_docs/install/installing-drill-in-embedded-mode/030-starting-drill-on-linux-and-mac-os-x.md
@@ -16,7 +16,7 @@ Start the Drill shell using the `drill-embedded` command. The command uses a jdb
 
    At this point, you can [run queries]({{site.baseurl}}/docs/query-data).
 
-You can also use the **sqlline** command to start Drill using a custom connection string, as described in ["Using an Ad-Hoc Connection to Drill"]({{site.baseurl}}/docs/starting-drill-in-distributed-mode/#using-an-ad-hoc-connection-to-drill). For example, you can specify the storage plugin when you start the shell. Doing so eliminates the need to specify the storage plugin in the query: For example, this command specifies the `dfs` storage plugin.
+To start Drill, you can also use the **sqlline** command and a custom connection string, as described in detail in ["Using an Ad-Hoc Connection to Drill"]({{site.baseurl}}/docs/starting-drill-in-distributed-mode/#using-an-ad-hoc-connection-to-drill). For example, you can specify the storage plugin when you start the shell, which eliminates the need to specify it in the query. The following command specifies the `dfs` storage plugin:
 
    bin/sqlline -u jdbc:drill:schema=dfs;zk=local
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md b/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
index 3f8caef..20cc52d 100644
--- a/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
+++ b/_docs/install/installing-drill-in-embedded-mode/050-starting-drill-on-windows.md
@@ -15,7 +15,7 @@ Start the Drill shell using the **sqlline command**. The `zk=local` means the lo
 
 At this point, you can [submit queries]({{ site.baseurl }}/docs/drill-in-10-minutes#query-sample-data) to Drill.
 
-You can use the schema option in the **sqlline** command to specify a storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query: For example, this command specifies the `dfs` storage plugin.
+You can use the schema option in the **sqlline** command to specify a storage plugin. Specifying the storage plugin when you start up eliminates the need to specify the storage plugin in the query. For example, this command specifies the `dfs` storage plugin:
 
    C:\bin\sqlline sqlline.bat -u "jdbc:drill:schema=dfs;zk=local"
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md b/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
index 9d7d3fb..f604dfe 100755
--- a/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
+++ b/_docs/odbc-jdbc-interfaces/020-using-jdbc-with-squirrel-on-windows.md
@@ -3,7 +3,7 @@ title: "Using JDBC with SQuirreL on Windows"
 parent: "ODBC/JDBC Interfaces"
 ---
 To use the JDBC Driver to access Drill through SQuirreL, ensure that you meet the prerequisites and follow the steps in this section.
-### Prerequisites
+## Prerequisites
 
   * SQuirreL requires JRE 7
   * Drill installed in distributed mode on one or multiple nodes in a cluster. Refer to the [Install Drill]({{ site.baseurl }}/docs/install-drill/) documentation for more information.
@@ -18,7 +18,7 @@ If a DNS entry does not exist, create the entry for the Drill node(s).
 
 ----------
 
-### Step 1: Getting the Drill JDBC Driver
+## Step 1: Getting the Drill JDBC Driver
 
 The Drill JDBC Driver `JAR` file must exist in a directory on your Windows
 machine in order to configure the driver in the SQuirreL client.
@@ -39,7 +39,7 @@ you can locate the driver in the following directory:
 
 ----------
 
-### Step 2: Installing and Starting SQuirreL
+## Step 2: Installing and Starting SQuirreL
 
 To install and start SQuirreL, complete the following steps:
 
@@ -50,14 +50,14 @@ To install and start SQuirreL, complete the following steps:
 
 ----------
 
-### Step 3: Adding the Drill JDBC Driver to SQuirreL
+## Step 3: Adding the Drill JDBC Driver to SQuirreL
 
 To add the Drill JDBC Driver to SQuirreL, define the driver and create a
 database alias. The alias is a specific instance of the driver configuration.
 SQuirreL uses the driver definition and alias to connect to Drill so you can
 access data sources that you have registered with Drill.
 
-#### A. Define the Driver
+### A. Define the Driver
 
 To define the Drill JDBC Driver, complete the following steps:
 
@@ -79,7 +79,7 @@ To define the Drill JDBC Driver, complete the following steps:
 
    ![drill query flow]({{ site.baseurl }}/docs/img/52.png)
 
-#### B. Create an Alias
+### B. Create an Alias
 
 To create an alias, complete the following steps:
 
@@ -88,43 +88,14 @@ To create an alias, complete the following steps:
     
     ![drill query flow]({{ site.baseurl }}/docs/img/19.png)
     
-3. Enter the following information:
-  
-     <table style='table-layout:fixed;width:100%'><tbody><tr>
-     <td valign="top" width="10%"><strong>Option</strong></td>
-     <td valign="top" style='width: 500px;'><strong>Description</strong></td>
-     </tr>
-     <tr>
-     <td valign="top">Alias Name</td>
-     <td valign="top">A unique name for the Drill JDBC Driver alias.</td>
-     </tr>
-     <tr>
-     <td valign="top">Driver</td>
-     <td valign="top">Select the Drill JDBC Driver.</td>
-     </tr>
-     <tr>
-     <td valign="top">URL</td>
-     <td valign="top">Enter the connection URL with the name of the Drill directory stored in ZooKeeper and the cluster ID:
-       <code>jdbc:drill:zk=&lt;<em>zookeeper_quorum</em>&gt;/&lt;drill_directory_in_zookeeper&gt;/&lt;cluster_ID&gt;;schema=&lt;<em>schema_to_use_as_default</em>&gt;</code>
-       <em>The following examples show URLs for Drill installed on a single node:</em><br />
-       <span style="font-family: monospace;font-size: 14.0px;line-height: 1.4285715;background-color: transparent;">jdbc:drill:zk=10.10.100.56:5181/drill/demo_mapr_com-drillbits;schema=hive<br /></span>
-       <span style="font-family: monospace;font-size: 14.0px;line-height: 1.4285715;background-color: transparent;">jdbc:drill:zk=10.10.100.24:2181/drill/drillbits1;schema=hive<br /> </span>
-       <em>The following example shows a URL for Drill installed in distributed mode with a connection to a ZooKeeper quorum:</em>
-       <span style="font-family: monospace;font-size: 14.0px;line-height: 1.4285715;background-color: transparent;">jdbc:drill:zk=10.10.100.30:5181,10.10.100.31:5181,10.10.100.32:5181/drill/drillbits1;schema=hive</span>
-          <ul>
-          <li>Including a default schema is optional.</li>
-          <li>The ZooKeeper port is 2181. In a MapR cluster, the ZooKeeper port is 5181.</li>
-          <li>The Drill directory stored in ZooKeeper is <code>/drill</code>.</li>
-          <li>The Drill default cluster ID is<code> drillbits1</code>.</li>
-          </ul>
-     </td></tr><tr>
-     <td valign="top">User Name</td>
-     <td valign="top">admin</td>
-     </tr>
-     <tr>
-     <td valign="top">Password</td>
-     <td valign="top">admin</td>
-     </tr></tbody></table>
+3. Enter the following information:  
+
+   * Alias Name: A unique name for the Drill JDBC Driver alias  
+   * Driver: Select the Drill JDBC Driver  
+   * URL: Enter the connection URL with the name of the Drill directory stored in ZooKeeper and the cluster ID, as shown in the [next section]({{site.baseurl}}/docs/using-jdbc-with-squirrel-on-windows/#entering-the-connection-url).  
+   * User Name: admin  
+   * Password: admin  
+
 4. Click **Ok**. The Connect to: dialog box appears.  
 
     ![drill query flow]({{ site.baseurl }}/docs/img/30.png)
@@ -135,9 +106,28 @@ To create an alias, complete the following steps:
      
 6. Click **OK**. SQuirreL displays a series of tabs.
 
+### Entering the Connection URL  
+In step 3 of the procedure to create an alias, use the following syntax to enter the connection URL that includes the name of the Drill directory stored in ZooKeeper and the cluster ID:  
+
+     jdbc:drill:zk=<zookeeper_quorum>/<drill_directory_in_zookeeper>/<cluster_ID>;schema=<schema_to_use_as_default>
+
+The following examples show URLs for Drill installed on a single node:
+
+     jdbc:drill:zk=10.10.100.56:5181/drill/demo_mapr_com-drillbits;schema=hive
+     jdbc:drill:zk=10.10.100.24:2181/drill/drillbits1;schema=hive
+
+The following example shows a URL for Drill installed in distributed mode with a connection to a ZooKeeper quorum:
+ 
+     jdbc:drill:zk=10.10.100.30:5181,10.10.100.31:5181,10.10.100.32:5181/drill/drillbits1;schema=hive
+
+* Including a default schema is optional.
+* The ZooKeeper port is 2181. In a MapR cluster, the ZooKeeper port is 5181.
+* The Drill directory stored in ZooKeeper is `/drill`.
+* The Drill default cluster ID is `drillbits1`.
+
 ----------
 
-### Step 4: Running a Drill Query from SQuirreL
+## Step 4: Running a Drill Query from SQuirreL
 
 Once you have SQuirreL successfully connected to your cluster through the
 Drill JDBC Driver, you can issue queries from the SQuirreL client. You can run

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
index f79f2b9..c17ac33 100644
--- a/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
+++ b/_docs/query-data/query-a-file-system/030-querying-plain-text-files.md
@@ -164,7 +164,7 @@ a file to have this extension. Later, you learn how to skip this step and query
 Get data about "Zoological Journal of the Linnean" that appears more than 250
 times a year in the books that Google scans.
 
-  1. Switch back to using the `dfs` storage plugin.
+  1. Use the `dfs` storage plugin.
   
           USE dfs;
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index 3fda60b..a30c4b9 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -22,7 +22,7 @@ Drill reads from and writes to data sources having a wide variety of types. Dril
 | CHARACTER VARYING, CHARACTER, CHAR,*** or VARCHAR | UTF8-encoded variable-length string. The default limit is 1 character. The maximum character limit is 2,147,483,647. | CHAR(30) casts data to a 30-character string maximum.                          |
 
 
-\* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. The NUMERIC data type is an alias for the DECIMAL data type.  
+\* In this release, Drill disables the DECIMAL data type (an alpha feature), including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. The NUMERIC data type is an alias for the DECIMAL data type.  
 \*\* Not currently supported.  
 \*\*\* Currently, Drill supports only variable-length strings.  
 
@@ -93,29 +93,29 @@ In some cases, Drill converts schema-less data to correctly-typed data implicitl
 * Text: CSV, TSV, and other text  
   Implicitly casts all textual data to VARCHAR.
 
-## Explicit Casting Precedence of Data Types
+## Implicit Casting Precedence of Data Types
 
 The following list includes data types Drill uses in descending order of precedence. Casting precedence shown in the following table applies to the implicit casting that Drill performs. For example, Drill might implicitly cast data when a query includes a function or filter on mismatched data types:
 
     SELECT myBigInt FROM mytable WHERE myBigInt = 2.5;
 
-As shown in the table, you can cast a NULL value, which has the lowest precedence, to any other type; you can cast a SMALLINT (not supported in this release) value to INT. Drill might deviate from these precedence rules for performance reasons. Under certain circumstances, such as queries involving SUBSTR and CONCAT functions, Drill reverses the order of precedence and allows a cast to VARCHAR from a type of higher precedence than VARCHAR, such as BIGINT.
+As shown in the table, Drill can cast a NULL value, which has the lowest precedence, to any other type; you can cast a SMALLINT (not supported in this release) value to INT. Drill might deviate from these precedence rules for performance reasons. Under certain circumstances, such as queries involving SUBSTR and CONCAT functions, Drill reverses the order of precedence and allows a cast to VARCHAR from a type of higher precedence than VARCHAR, such as BIGINT.
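+For example, reusing the `mytable` and `myBigInt` names from the query above, a filter that compares the BIGINT column to a VARCHAR literal causes Drill to implicitly cast the literal to BIGINT, the type of higher precedence, before comparing:
+
+    SELECT myBigInt FROM mytable WHERE myBigInt = '2';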
 
 ### Casting Precedence
 
-| Precedence | Data Type              | Precedence |    Data Type   |
-|------------|------------------------|------------|----------------|
-| 1          | INTERVALYEAR (highest) | 11         | INT            |
-| 2          | INTERVLADAY            | 12         | UINT2          |
-| 3          | TIMESTAMP              | 13         | SMALLINT*      |
-| 4          | DATE                   | 14         | UINT1          |
-| 5          | TIME                   | 15         | VAR16CHAR      |
-| 6          | DOUBLE                 | 16         | FIXED16CHAR    |
-| 7          | DECIMAL                | 17         | VARCHAR        |
-| 8          | UINT8                  | 18         | CHAR           |
-| 9          | BIGINT                 | 19         | VARBINARY      |
-| 10         | UINT4                  | 20         | FIXEDBINARY    |
-| 21         | NULL (lowest)          | 21         | NULL (lowest)  |
+| Precedence | Data Type              | Precedence | Data Type     |
+|------------|------------------------|------------|---------------|
+| 1          | INTERVALYEAR (highest) | 11         | INT           |
+| 2          | INTERVALDAY            | 12         | UINT2         |
+| 3          | TIMESTAMP              | 13         | SMALLINT*     |
+| 4          | DATE                   | 14         | UINT1         |
+| 5          | TIME                   | 15         | VAR16CHAR     |
+| 6          | DOUBLE                 | 16         | FIXED16CHAR   |
+| 7          | DECIMAL                | 17         | VARCHAR       |
+| 8          | UINT8                  | 18         | CHAR          |
+| 9          | BIGINT                 | 19         | VARBINARY     |
+| 10         | UINT4                  | 20         | FIXEDBINARY   |
+|            |                        | 21         | NULL (lowest) |
 
 \* Not supported in this release.
 
@@ -153,9 +153,9 @@ The following tables show data types that Drill can cast to/from other data type
 | DECIMAL       | yes      | yes | yes    | yes     | yes   | yes  | yes         | yes     | yes       |
 | DOUBLE        | yes      | yes | yes    | yes     | yes   | yes  | no          | yes     | no        |
 | FLOAT         | yes      | yes | yes    | yes     | yes   | yes  | no          | yes     | no        |
-| CHAR          | yes      | yes | yes    | yes     | yes   | char | yes         | yes     | yes       |
+| CHAR          | yes      | yes | yes    | yes     | yes   | no   | yes         | yes     | yes       |
 | FIXEDBINARY** | yes      | yes | yes    | yes     | yes   | no   | no          | yes     | yes       |
-| VARCHAR***    | yes      | yes | yes    | yes     | yes   | yes  | yes         | no      | no        |
+| VARCHAR***    | yes      | yes | yes    | yes     | yes   | yes  | yes         | no      | yes       |
 | VARBINARY**   | yes      | yes | yes    | yes     | yes   | no   | yes         | yes     | no        |
 
 
@@ -163,7 +163,7 @@ The following tables show data types that Drill can cast to/from other data type
 \*\* Used to cast binary UTF-8 data coming to/from sources such as MapR-DB/HBase.   
 \*\*\* You cannot convert a character string having a decimal point to an INT or BIGINT.   
 
-{% include startnote.html %}The CAST function does not support all representations of FIXEDBINARY. Only the UTF-8 format is supported. {% include endnote.html %}
+{% include startnote.html %}The CAST function does not support all representations of FIXEDBINARY and VARBINARY. Only the UTF-8 format is supported. {% include endnote.html %}
 
 If your FIXEDBINARY or VARBINARY data is in a format other than UTF-8, such as big endian, use the CONVERT_TO/FROM functions instead of CAST.
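+For example, the following sketch (the table and column names are hypothetical) decodes a 4-byte big endian integer from a binary column with CONVERT_FROM rather than CAST:
+
+    SELECT CONVERT_FROM(row_key, 'INT_BE') FROM myhbasetable;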
 
@@ -182,9 +182,9 @@ If your FIXEDBINARY or VARBINARY data is in a format other than UTF-8, such as b
 | INTERVALYEAR | Yes  | No   | Yes       | Yes         | No           | Yes         |
 | INTERVALDAY  | Yes  | No   | Yes       | Yes         | Yes          | No          |
 
-\* Used to cast binary UTF-8 data coming to/from sources such as MapR-DB/HBase.   
+\* Used to cast binary UTF-8 data coming to/from sources such as MapR-DB/HBase. The CAST function does not support all representations of FIXEDBINARY and VARBINARY. Only the UTF-8 format is supported. 
 
-## CONVERT_TO and CONVERT_FROM Data Types
+## CONVERT_TO and CONVERT_FROM
 
 CONVERT_TO converts data to binary from the input type. CONVERT_FROM converts data from binary to the input type. For example, the following CONVERT_TO function converts an integer in big endian format to VARBINARY:
 

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/020-date-time-and-timestamp.md b/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
index dc9d17c..8683aa0 100644
--- a/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
+++ b/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
@@ -19,11 +19,11 @@ Using familiar date and time formats, listed in the [SQL data types table]({{ si
 
 ## INTERVALYEAR and INTERVALDAY
 
-The INTERVALYEAR AND INTERVALDAY types represent a period of time. The INTERVALYEAR type specifies values from a year to a month. The INTERVALDAY type specifies values from a day to seconds.
+The INTERVALYEAR and INTERVALDAY types represent a period of time. The INTERVALYEAR type specifies values from a year to a month. The INTERVALDAY type specifies values from a day to seconds.
 
 ### Interval in Data Source
 
-If your interval data is in the data source, for example a JSON file, cast the JSON VARCHAR types to INTERVALYEAR and INTERVALDAY using the following ISO 8601 syntax:
+If your interval data is in the data source, you need to cast the data to an SQL interval type to query the data using Drill. For example, to use interval data in a JSON file, cast the JSON data, which is of the VARCHAR type, to INTERVALYEAR and INTERVALDAY using the following ISO 8601 syntax:
 
     P [qty] Y [qty] M [qty] D T [qty] H [qty] M [qty] S
 
@@ -41,9 +41,9 @@ where:
 * M follows a number of minutes.
 * S follows a number of seconds and optional milliseconds to the right of a decimal point
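+For example, assuming a JSON file containing a VARCHAR field such as `"duration" : "P1Y2M"`, a hypothetical query casts that field to INTERVALYEAR:
+
+    SELECT CAST(duration AS INTERVALYEAR) FROM dfs.`/tmp/intervals.json`;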
 
-### Interval Literal
+### Using the Interval Literal in Input
 
-You can use INTERVAL as a keyword that introduces an interval literal that denotes a data type. With the input of interval data, use the following SQL literals to restrict the set of stored interval fields:
+When you want to use interval data in input, use INTERVAL as a keyword that introduces an interval literal that denotes a data type. With the input of interval data, use the following SQL literals to restrict the set of stored interval fields:
 
 * YEAR
 * MONTH

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/sql-reference/data-types/030-handling-different-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/030-handling-different-data-types.md b/_docs/sql-reference/data-types/030-handling-different-data-types.md
index c14e795..ed1e0aa 100644
--- a/_docs/sql-reference/data-types/030-handling-different-data-types.md
+++ b/_docs/sql-reference/data-types/030-handling-different-data-types.md
@@ -6,7 +6,7 @@ parent: "Data Types"
 To query HBase data in Drill, convert every column of an HBase table to/from byte arrays from/to an SQL data type using [CONVERT_TO or CONVERT_FROM]({{ site.baseurl }}/docs//data-type-conversion/#convert_to-and-convert_from) with one exception: When converting data represented as a string to an INT or BIGINT number, use CAST. Use [CAST]({{ site.baseurl }}/docs/data-type-conversion/#cast) to convert integers to/from HBase.
 
 ## Handling Textual Data
-In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as previously mentioned. In addition to using the CAST function, you can also use TO_CHAR, TO_DATE, TO_NUMBER, and TO_TIMESTAMP. If the SELECT statement includes a WHERE clause that compares a column of an unknown data type, cast both the value of the column and the comparison value in the WHERE clause.
+In a textual file, such as CSV, Drill interprets every field as a VARCHAR, as previously mentioned. In addition to using the CAST function, you can also use TO_CHAR, TO_DATE, TO_NUMBER, and TO_TIMESTAMP. If the SELECT statement includes a WHERE clause that compares a column of an unknown data type, you might need to cast both the value of the column and the comparison value in the WHERE clause. In some cases, Drill performs implicit casting and no casting is necessary on your part.
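+For example, because Drill reads each line of a CSV file into the `columns` array of VARCHARs, a hypothetical query casts the column value before a numeric comparison:
+
+    SELECT * FROM dfs.`/tmp/data.csv` WHERE CAST(columns[0] AS INT) > 100;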
 
 ## Handling JSON and Parquet Data
 Complex and nested data structures in JSON and Parquet files are [composite types](({{site.baseurl}}/docs/supported-data-types/#composite-types)): map and array.

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/sql-reference/sql-functions/020-data-type-conversion.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/020-data-type-conversion.md b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
index 0197f8a..6028ffa 100644
--- a/_docs/sql-reference/sql-functions/020-data-type-conversion.md
+++ b/_docs/sql-reference/sql-functions/020-data-type-conversion.md
@@ -393,13 +393,13 @@ If you have dates and times in other formats, use a data type conversion functio
 The following table lists data type formatting functions that you can
 use in your Drill queries as described in this section:
 
-**Function**| **Return Type**  
----|---  
-[TO_CHAR]({{site.baseurl}}/docs/data-type-conversion/#TO_CHAR)(expression, format)| VARCHAR  
-[TO_DATE]({{site.baseurl}}/docs/data-type-conversion/#TO_DATE)(expression, format)| DATE  
-[TO_NUMBER]({{site.baseurl}}/docs/data-type-conversion/#TO_NUMBER)(VARCHAR, format)| DECIMAL  
-[TO_TIMESTAMP]({{site.baseurl}}/docs/data-type-conversion/#TO_TIMESTAMP)(VARCHAR, format)| TIMESTAMP
-[TO_TIMESTAMP]({{site.baseurl}}/docs/data-type-conversion/#TO_TIMESTAMP)(DOUBLE)| TIMESTAMP
+| Function                                                                                         | Return Type |
+|--------------------------------------------------------------------------------------------------|-------------|
+| [TO_CHAR]({{site.baseurl}}/docs/data-type-conversion/#to_char)(expression, format)               | VARCHAR     |
+| [TO_DATE]({{site.baseurl}}/docs/data-type-conversion/#to_date)(expression, format)               | DATE        |
+| [TO_NUMBER]({{site.baseurl}}/docs/data-type-conversion/#to_number)(VARCHAR, format)              | DECIMAL     |
+| [TO_TIMESTAMP]({{site.baseurl}}/docs/data-type-conversion/#to_timestamp)(VARCHAR, format)        | TIMESTAMP   |
+| [TO_TIMESTAMP]({{site.baseurl}}/docs/data-type-conversion/#to_timestamp)(DOUBLE)                 | TIMESTAMP   |
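+For example, the following hypothetical one-row query (the dummy FROM clause is just one way to produce a single row) converts a string to a DATE using a Joda-style format pattern:
+
+    SELECT TO_DATE('2015-05-30', 'yyyy-MM-dd') FROM (VALUES(1));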
 
 ### Format Specifiers for Numerical Conversions
 Use the following Java format specifiers for converting numbers:

http://git-wip-us.apache.org/repos/asf/drill/blob/5deca3ed/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md b/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
index a6df716..a7de7ea 100644
--- a/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
+++ b/_docs/sql-reference/sql-functions/030-date-time-functions-and-arithmetic.md
@@ -74,21 +74,17 @@ Returns the sum of a date/time and a number of days/hours, or of a date/time and
 
 ### DATE_ADD Syntax
 
-    DATE_ADD(date literal_date, integer)
+    DATE_ADD(keyword literal, integer)
 
     DATE_ADD(keyword literal, interval expr)
 
-    DATE_ADD(column <date type>)
-
-*date* is the keyword date.  
-*literal_date* is a date in yyyy-mm-dd format enclosed in single quotation marks.  
-*integer* is a number of days to add to the date/time.  
-*column* is a date data in a data source.
-
+    DATE_ADD(column, integer)
 
 *keyword* is the word date, time, or timestamp.  
-*literal* is a date, time, or timestamp literal.  
-*interval* is a keyword  
+*literal* is a date, time, or timestamp literal, for example, a date in yyyy-mm-dd format enclosed in single quotation marks.  
+*integer* is a number of days to add to the date/time.  
+*column* is date, time, or timestamp data in a data source column.  
+*interval* is the keyword interval.  
 *expr* is an interval expression.  
 
 ### DATE_ADD Examples
@@ -215,21 +211,17 @@ Returns the difference between a date/time and a number of days/hours, or betwee
 
 ### DATE_SUB Syntax
 
-    DATE_SUB(date literal_date, integer) 
+    DATE_SUB(keyword literal, integer) 
 
     DATE_SUB(keyword literal, interval expr)  
 
-    DATE_ADD(column <date type>)  
-
-*date* is the keyword date.  
-*literal_date* is a date in yyyy-mm-dd format enclosed in single quotation marks.  
-*integer* is a number of days to subtract from the date/time.  
-*column* is date data in a data source.
-
+    DATE_SUB(column, integer)  
 
 *keyword* is the word date, time, or timestamp.  
-*literal* is a date, time, or timestamp literal.  
-*interval* is a keyword.  
+*literal* is a date, time, or timestamp literal, for example, a date in yyyy-mm-dd format enclosed in single quotation marks.  
+*integer* is a number of days to subtract from the date, time, or timestamp.  
+*column* is date, time, or timestamp data in the data source.  
+*interval* is the keyword interval.  
 *expr* is an interval expression.
 
 ### DATE_SUB Examples

