drill-commits mailing list archives

From bridg...@apache.org
Subject [2/2] drill git commit: typo
Date Sat, 15 Aug 2015 01:02:29 GMT
typo

fix mongo sp page

fix interval stuff

fix numbering

fix numbering again

correct code blocks

minor edits

move MapR-specific stuff to MapR docs, sp instance to configuration

add support example

sys.version > VALUES

fixes to Support additions

squash 1.2 updates and fixes

squash edits

generic jar path

endian encoding fix

hex encoding fix


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/53008ee1
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/53008ee1
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/53008ee1

Branch: refs/heads/gh-pages
Commit: 53008ee1a9fc3b1d79bdf88ec02ce2926ef75018
Parents: 2345c74
Author: Kristine Hahn <khahn@maprtech.com>
Authored: Tue Aug 11 16:58:59 2015 -0700
Committer: Kristine Hahn <khahn@maprtech.com>
Committed: Fri Aug 14 17:25:56 2015 -0700

----------------------------------------------------------------------
 .../010-architecture-introduction.md            |   3 +-
 .../070-configuring-user-impersonation.md       |  13 +-
 .../075-configuring-user-authentication.md      |  14 +-
 .../020-start-up-options.md                     |   2 +-
 .../040-persistent-configuration-storage.md     |  41 +----
 .../035-plugin-configuration-basics.md          |   8 +-
 .../040-file-system-storage-plugin.md           |   4 +-
 _docs/connect-a-data-source/050-workspaces.md   |   2 +-
 .../070-hive-storage-plugin.md                  |  10 +-
 .../090-mongodb-plugin-for-apache-drill.md      | 154 ++++++++++++-------
 .../connect-a-data-source/100-mapr-db-format.md |   6 +-
 .../020-hive-to-drill-data-type-mapping.md      |  11 +-
 .../030-deploying-and-using-a-hive-udf.md       |  41 ++++-
 .../040-parquet-format.md                       |  32 ++--
 .../050-json-data-model.md                      |   2 +-
 .../060-text-files-csv-tsv-psv.md               |  62 ++++++++
 _docs/getting-started/010-drill-introduction.md |   2 +-
 _docs/getting-started/020-why-drill.md          |   2 +-
 .../020-tableau-examples.md                     |   2 +-
 .../080-configuring-jreport.md                  |  35 +----
 .../005-querying-a-file-system-introduction.md  |   2 +-
 _docs/sql-reference/090-sql-extensions.md       |  11 +-
 .../data-types/010-supported-data-types.md      |  51 +++---
 .../data-types/020-date-time-and-timestamp.md   |  26 ++--
 .../sql-commands/050-create-view.md             |   2 +-
 .../sql-reference/sql-commands/060-describe.md  |   2 +-
 .../090-show-databases-and-show-schemas.md      |   3 +-
 .../sql-commands/100-show-files.md              |   4 +-
 .../sql-commands/110-show-tables.md             |   9 +-
 _docs/sql-reference/sql-commands/120-use.md     |   9 +-
 .../005-about-sql-function-examples.md          |   2 +-
 .../sql-functions/010-math-and-trig.md          |  16 +-
 .../sql-functions/020-data-type-conversion.md   |  68 ++++----
 .../030-date-time-functions-and-arithmetic.md   |  66 ++++----
 .../sql-functions/040-string-manipulation.md    |  42 ++---
 .../040-sql-window-functions-examples.md        |   2 +-
 _docs/tutorials/020-drill-in-10-minutes.md      |   3 +-
 .../030-analyzing-the-yelp-academic-dataset.md  |   3 +-
 .../020-getting-to-know-the-drill-sandbox.md    |  27 +---
 39 files changed, 418 insertions(+), 376 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/architecture/010-architecture-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/architecture/010-architecture-introduction.md b/_docs/architecture/010-architecture-introduction.md
index 8cb0e80..9fb5318 100755
--- a/_docs/architecture/010-architecture-introduction.md
+++ b/_docs/architecture/010-architecture-introduction.md
@@ -26,8 +26,7 @@ query execution without moving data over the network or between nodes. Drill
 uses ZooKeeper to maintain cluster membership and health-check information.
 
 Though Drill works in a Hadoop cluster environment, Drill is not tied to
-Hadoop and can run in any distributed cluster environment. The only pre-
-requisite for Drill is Zookeeper.
+Hadoop and can run in any distributed cluster environment. The only prerequisite for Drill is ZooKeeper.
 
 See Drill Query Execution.
 

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/configure-drill/070-configuring-user-impersonation.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/070-configuring-user-impersonation.md b/_docs/configure-drill/070-configuring-user-impersonation.md
index 85c9209..bc7a151 100644
--- a/_docs/configure-drill/070-configuring-user-impersonation.md
+++ b/_docs/configure-drill/070-configuring-user-impersonation.md
@@ -114,16 +114,9 @@ Complete the following steps on each Drillbit node to enable user impersonation,
 
 3. Verify that enabled is set to `‘true’`.
 4. Set the maximum number of chained user hops that you want Drill to allow.
-5. (MapR cluster only) Add one of the following lines to the `drill-env.sh` file:
-   * If the underlying file system is not secure, add the following line:
-   ` export MAPR_IMPERSONATION_ENABLED=true`
-   * If the underlying file system has MapR security enabled, add the following line:
-    `export MAPR_TICKETFILE_LOCATION=/opt/mapr/conf/mapruserticket`
-6. Restart the Drillbit process on each Drill node.
-   * In a MapR cluster, run the following command:
-    `maprcli node services -name drill-bits -action restart -nodes <hostname> -f`
-   * In a non-MapR environment, run the following command:  
-     <DRILLINSTALL_HOME>/bin/drillbit.sh restart
+5. Restart the Drillbit process on each Drill node.
+
+         <DRILLINSTALL_HOME>/bin/drillbit.sh restart
 
 ## Impersonation and Chaining Example
 Frank is a senior HR manager at a company. Frank has access to all of the employee data because he is a member of the hr group. Frank created a table named “employees” in his home directory to store the employee data he uses. Only Frank has access to this table.

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/configure-drill/075-configuring-user-authentication.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/075-configuring-user-authentication.md b/_docs/configure-drill/075-configuring-user-authentication.md
index eab1027..c00763d 100644
--- a/_docs/configure-drill/075-configuring-user-authentication.md
+++ b/_docs/configure-drill/075-configuring-user-authentication.md
@@ -57,13 +57,9 @@ Complete the following steps to install and configure PAM for Drill:
           }
 
 5. (Optional) To add or remove different PAM profiles, add or delete the profile names in the `“pam_profiles”` array shown above.  
-6. Restart the Drillbit process on each Drill node.
-   * In a MapR cluster, run the following command:  
-
-              maprcli node services -name drill-bits -action restart -nodes <hostname> -f
-   * In a non-MapR environment, run the following command: 
+6. Restart the Drillbit process on each Drill node. 
  
-              <DRILLINSTALL_HOME>/bin/drillbit.sh restart
+        <DRILLINSTALL_HOME>/bin/drillbit.sh restart
 
 ### Implementing and Configuring a Custom Authenticator
 
@@ -143,12 +139,8 @@ Complete the following steps to build and implement a custom authenticator:
                }
               }  
 4. Restart the Drillbit process on each Drill node.
-   * In a MapR cluster, run the following command:  
-
-              maprcli node services -name drill-bits -action restart -nodes <hostname> -f
-   * In a non-MapR environment, run the following command: 
  
-              <DRILLINSTALL_HOME>/bin/drillbit.sh restart
+        <DRILLINSTALL_HOME>/bin/drillbit.sh restart
        
 
 

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/configure-drill/configuration-options/020-start-up-options.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/020-start-up-options.md b/_docs/configure-drill/configuration-options/020-start-up-options.md
index 7149f81..1f981fc 100644
--- a/_docs/configure-drill/configuration-options/020-start-up-options.md
+++ b/_docs/configure-drill/configuration-options/020-start-up-options.md
@@ -50,7 +50,7 @@ The summary of start-up options, also known as boot options, lists default value
 * drill.exec.buffer.size  
   Defines the amount of memory available, in terms of record batches, to hold data on the downstream side of an operation. Drill pushes data downstream as quickly as possible to make data immediately available. This requires Drill to use memory to hold the data pending operations. When data on a downstream operation is required, that data is immediately available so Drill does not have to go over the network to process it. Providing more memory to this option increases the speed at which Drill completes a query.  
 * drill.exec.sort.external.spill.directories  
-  Tells Drill which directory to use when spooling. Drill uses a spool and sort operation for beyond memory operations. The sorting operation is designed to spool to a Hadoop file system. The default Hadoop file system is a local file system in the `/tmp` directory. Spooling performance (both writing and reading back from it) is constrained by the file system. For MapR clusters, use MapReduce volumes or set up local volumes to use for spooling purposes. Volumes improve performance and stripe data across as many disks as possible.  
+  Tells Drill which directory to use when spooling. Drill uses a spool and sort operation for beyond memory operations. The sorting operation is designed to spool to a Hadoop file system. The default Hadoop file system is a local file system in the `/tmp` directory. Spooling performance (both writing and reading back from it) is constrained by the file system.  
 * drill.exec.zk.connect  
   Provides Drill with the ZooKeeper quorum to use to connect to data sources. Change this setting to point to the ZooKeeper quorum that you want Drill to use. You must configure this option on each Drillbit node.
 
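For reference, boot options such as the ones above can be inspected at run time through the `sys.boot` system table. A minimal sketch, assuming a running Drillbit and the standard system tables:

    -- List all start-up (boot) options and their current values.
    SELECT * FROM sys.boot;

    -- Narrow the list to the spill-directory option described above
    -- (the LIKE pattern is an assumption about the option name).
    SELECT * FROM sys.boot WHERE name LIKE '%spill%';
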

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
----------------------------------------------------------------------
diff --git a/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md b/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
index 193d938..321a26f 100644
--- a/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
+++ b/_docs/configure-drill/configuration-options/040-persistent-configuration-storage.md
@@ -4,7 +4,7 @@ parent: "Configuration Options"
 ---
 Drill stores persistent configuration data in a persistent configuration store
 (PStore). This data is encoded in JSON or Protobuf format. Drill can use the
-local file system or a distributed file system, such as HDFS or MapR-FS to store this data. The data
+local file system or a distributed file system, such as HDFS, to store this data. The data
 stored in a PStore includes state information for storage plugins, query
 profiles, and ALTER SYSTEM settings. The default type of PStore configured
 depends on the Drill installation mode.
@@ -12,10 +12,10 @@ depends on the Drill installation mode.
 The following table provides the persistent storage mode for each of the Drill
 modes:
 
-| Mode        | Description                                                                                                                                                             |
-|-------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Embedded    | Drill stores persistent data in the local file system. You cannot modify the PStore location for Drill in embedded mode.                                                |
-| Distributed | Drill stores persistent data in ZooKeeper, by default. You can modify where ZooKeeper offloads data, or you can change the persistent storage mode to HBase or MapR-DB. |
+| Mode        | Description                                                                                                                                                                          |
+|-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Embedded    | Drill stores persistent data in the local file system. You cannot modify the PStore location for Drill in embedded mode.                                                             |
+| Distributed | Drill stores persistent data in ZooKeeper, by default. You can modify where ZooKeeper offloads data, or you can change the persistent storage mode to HBase, for example.            |
   
 {% include startnote.html %}Switching between storage modes does not migrate configuration data.{% include endnote.html %}
 
@@ -43,14 +43,12 @@ Drill node and then restart the Drillbit service.
 	drill.exec: {
 	 cluster-id: "my_cluster_com-drillbits",
 	 zk.connect: "<zkhostname>:<port>",
-	 sys.store.provider.zk.blobroot: "maprfs://<directory to store pstore data>/"
+	 sys.store.provider.zk.blobroot: "hdfs://<directory to store pstore data>/"
 	}
 
-Issue the following command to restart the Drillbit on all Drill nodes:
+[Restart the Drillbit]({{site.baseurl}}/docs/starting-drill-in-distributed-mode/).
 
-    maprcli node services -name drill-bits -action restart -nodes <node IP addresses separated by a space>
-
-## HBase for Persistent Configuration Storage
+## Configuring HBase for Persistent Configuration Storage
 
 To change the persistent storage mode for Drill, add or modify the
 `sys.store.provider` block in `<drill_installation_directory>/conf/drill-
@@ -69,26 +67,3 @@ override.conf.`
 	    }
 	  },
 
-## MapR-DB for Persistent Configuration Storage
-
-If you have MapR-DB in your cluster, you can use MapR-DB for persistent
-configuration storage. Using MapR-DB to store persistent configuration data
-can prevent memory strain on ZooKeeper in clusters running heavy workloads.
-
-To change the persistent storage mode to MapR-DB, add or modify the
-`sys.store.provider` block in `<drill_installation_directory>/conf/drill-
-override.conf` on each Drill node and then restart the Drillbit service.
-
-**Example**
-
-	sys.store.provider: {
-	class: "org.apache.drill.exec.store.hbase.config.HBasePStoreProvider",
-	hbase: {
-	  table : "/tables/drill_store"
-	    }
-	},
-
-Issue the following command to restart the Drillbit on all Drill nodes:
-
-    maprcli node services -name drill-bits -action restart -nodes <node IP addresses separated by a space>
-

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/connect-a-data-source/035-plugin-configuration-basics.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/035-plugin-configuration-basics.md b/_docs/connect-a-data-source/035-plugin-configuration-basics.md
index 9eda181..2aa1f9f 100644
--- a/_docs/connect-a-data-source/035-plugin-configuration-basics.md
+++ b/_docs/connect-a-data-source/035-plugin-configuration-basics.md
@@ -125,7 +125,7 @@ The following table describes the attributes you configure for storage plugins i
   </tr>
 </table>
 
-\* Pertains only to distributed drill installations using the mapr-drill package.  
+\* Pertains only to distributed Drill installations using the mapr-drill package.  
 
 ## Using the Formats Attributes
 
@@ -144,7 +144,7 @@ For example, using uppercase letters in the query after defining the storage plu
 
 ## Storage Plugin REST API
 
-Drill provides a REST API that you can use to create a storage plugin configuration. Use an HTTP POST and pass two properties:
+If you need to add a storage plugin configuration to Drill and do not want to use a web browser, you can use the REST API that Drill provides. Use an HTTP POST and pass two properties:
 
 * name  
   The storage plugin configuration name. 
@@ -158,9 +158,9 @@ For example, this command creates a storage plugin named myplugin for reading fi
 
 ## Bootstrapping a Storage Plugin
 
-If you need to add a storage plugin configurationto Drill and do not want to use a web browser, you can create a [bootstrap-storage-plugins.json](https://github.com/apache/drill/blob/master/contrib/storage-hbase/src/main/resources/bootstrap-storage-plugins.json) file and include it on the classpath when starting Drill. The storage plugin configuration loads when Drill starts up.
+The REST API is recommended for programmatically adding a storage plugin configuration to Drill. Bootstrapping is an alternative that applies only in a distributed environment. You can create a [bootstrap-storage-plugins.json](https://github.com/apache/drill/blob/master/contrib/storage-hbase/src/main/resources/bootstrap-storage-plugins.json) file and include it on the classpath when starting Drill. The storage plugin configuration loads when Drill starts up.
 
-Bootstrapping a storage plugin configuration works only when the first Drillbit in the cluster first starts up. The configuration is
+Currently, bootstrapping a storage plugin configuration works only when the first Drillbit in the cluster first starts up. The configuration is
 stored in ZooKeeper, preventing Drill from picking up the bootstrap-storage-plugins.json again.
 
 After cluster startup, you have to use the REST API or Drill Web UI to add a storage plugin configuration. Alternatively, you

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/connect-a-data-source/040-file-system-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/040-file-system-storage-plugin.md b/_docs/connect-a-data-source/040-file-system-storage-plugin.md
index be7f595..89c00a7 100644
--- a/_docs/connect-a-data-source/040-file-system-storage-plugin.md
+++ b/_docs/connect-a-data-source/040-file-system-storage-plugin.md
@@ -87,8 +87,8 @@ workspace named `json_files`. The configuration points Drill to the
 The `connection` parameter in this configuration is "`file:///`", connecting Drill to the local file system.
 
 To query a file in the example `json_files` workspace, you can issue the `USE`
-command to tell Drill to use the `json_files` workspace configured in the `dfs`
-instance for each query that you issue:
+command to tell Drill to use the `json_files` workspace, which is included in the `dfs`
+configuration for each query that you issue:
 
     USE dfs.json_files;
     SELECT * FROM `donuts.json` WHERE type='frosted'

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/connect-a-data-source/050-workspaces.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/050-workspaces.md b/_docs/connect-a-data-source/050-workspaces.md
index 258e3fd..fcf279e 100644
--- a/_docs/connect-a-data-source/050-workspaces.md
+++ b/_docs/connect-a-data-source/050-workspaces.md
@@ -24,7 +24,7 @@ location of the data:
 
 You cannot include workspaces in the configurations of the
 `hive` and `hbase` plugins installed with Apache Drill, though Hive databases show up as workspaces in
-Drill. Each `hive` instance includes a `default` workspace that points to the  Hive metastore. When you query
+Drill. Each `hive` storage plugin configuration includes a `default` workspace that points to the  Hive metastore. When you query
 files and tables in the `hive default` workspaces, you can omit the
 workspace name from the query.
 
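To illustrate, both of the following queries resolve to the same table in the `hive default` workspace; the `orders` table name is hypothetical:

    -- Fully qualified: plugin, workspace, and table.
    SELECT * FROM hive.`default`.orders LIMIT 1;

    -- Equivalent: the `default` workspace name is omitted.
    SELECT * FROM hive.orders LIMIT 1;
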

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/connect-a-data-source/070-hive-storage-plugin.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/070-hive-storage-plugin.md b/_docs/connect-a-data-source/070-hive-storage-plugin.md
index ac50040..bad475d 100644
--- a/_docs/connect-a-data-source/070-hive-storage-plugin.md
+++ b/_docs/connect-a-data-source/070-hive-storage-plugin.md
@@ -22,7 +22,7 @@ To register a remote Hive metastore with Drill:
 1. Issue the following command to start the Hive metastore service on the system specified in the `hive.metastore.uris`:
    `hive --service metastore`
 2. In the [Drill Web UI]({{ site.baseurl }}/docs/plugin-configuration-basics/#using-the-drill-web-ui), select the **Storage** tab.
-3. In the list of disabled storage plugins in the Drill Web UI, click **Update** next to the `hive` instance. For example:
+3. In the list of disabled storage plugins in the Drill Web UI, click **Update** next to `hive`. The Hive storage plugin configuration appears:
 
         {
           "type": "hive",
@@ -44,11 +44,7 @@ To register a remote Hive metastore with Drill:
 5. Change the default location of files to suit your environment; for example, change `"fs.default.name"` property from `"file:///"` to one of these locations:
    * `hdfs://`
    * `hdfs://<hostname>:<port>`
-6. If you are running Drill and Hive in a secure MapR cluster, remove the following line from the configuration:  
-   `"hive.metastore.sasl.enabled" : "false"`
-7. Click **Enable**.  
-8. If you are running Drill and Hive in a secure MapR cluster, add the following line to `<DRILL_HOME>/conf/drill-env.sh` on each Drill node and then [restart the Drillbit service]({{site.baseurl}}/docs/starting-drill-in-distributed-mode/):  
-   `export DRILL_JAVA_OPTS="$DRILL_JAVA_OPTS -Dmapr_sec_enabled=true -Dhadoop.login=maprsasl -Dzookeeper.saslprovider=com.mapr.security.maprsasl.MaprSaslProvider -Dmapr.library.flatclass"`
+6. Click **Enable**.  
 
 After configuring a Hive storage plugin, you can [query Hive tables]({{ site.baseurl }}/docs/querying-hive/).
 
@@ -63,7 +59,7 @@ To configure an embedded Hive metastore, complete the following
 steps:
 
 1. In the [Drill Web UI]({{ site.baseurl }}/docs/plugin-configuration-basics/#using-the-drill-web-ui), and select the **Storage** tab.
-2. In the disabled storage plugins section, click **Update** next to `hive` instance.
+2. In the disabled storage plugin configurations section, click **Update** next to `hive`.
 3. In the configuration window, add the database configuration settings.
 
     **Example**

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
index 7e439e2..e3f60c9 100644
--- a/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
+++ b/_docs/connect-a-data-source/090-mongodb-plugin-for-apache-drill.md
@@ -17,7 +17,10 @@ To query MongoDB with Drill, you install Drill and MongoDB, and then you import
 
   1. [Install Drill]({{ site.baseurl }}/docs/installing-drill-in-embedded-mode), if you do not already have it installed.
   2. [Install MongoDB](http://docs.mongodb.org/manual/installation), if you do not already have it installed.
-  3. [Import the MongoDB zip code sample data set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set). You can use Mongo Import to get the data. 
+  3. [Import the MongoDB zip code sample data set](http://docs.mongodb.org/manual/tutorial/aggregation-zip-code-data-set).  
+     * Copy the `zips.json` content into a file and save it.  
+     * Create `/data/db` if it doesn't already exist.
+     * Make sure you have permissions to access the directories. 
+     * Use Mongo Import to import `zips.json`. 
 
 ## Configuring MongoDB
 
@@ -37,70 +40,113 @@ Drill must be running in order to access the Web UI to configure a storage plugi
         }
 
      {% include startnote.html %}27017 is the default port for `mongodb` instances.{% include endnote.html %} 
-  6. Click **Enable** to enable the storage plugin, and save the configuration.
+  6. Click **Enable** to enable the storage plugin.
 
 ## Querying MongoDB
 
-In the [Drill shell]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/), you can issue the `SHOW DATABASES` command to see a list of schemas from all
-Drill data sources, including MongoDB. If you downloaded the zip codes file,
-you should see `mongo.zipdb` in the results.
-
-    0: jdbc:drill:zk=local> SHOW DATABASES;
-    +--------------------+
-    |     SCHEMA_NAME    |
-    +--------------------+
-    | dfs.default        |
-    | dfs.root           |
-    | dfs.tmp            |
-    | sys                |
-    | mongo.zipdb        |
-    | cp.default         |
-    | INFORMATION_SCHEMA |
-    +--------------------+
-
-If you want all queries that you submit to default to `mongo.zipdb`, you can issue
-the `USE` command to change schema.
+In the [Drill shell]({{site.baseurl}}/docs/starting-drill-on-linux-and-mac-os-x/), set up Drill to use the zips collection you imported into MongoDB.
+
+1. Get a list of schemas from all
+Drill data sources, including MongoDB. 
+
+        SHOW DATABASES;
+   
+        +---------------------+
+        |     SCHEMA_NAME     |
+        +---------------------+
+        | INFORMATION_SCHEMA  |
+        | cp.default          |
+        | dfs.default         |
+        | dfs.root            |
+        | dfs.tmp             |
+        | mongo.local         |
+        | mongo.test          |
+        | sys                 |
+        +---------------------+
+        8 rows selected (1.385 seconds)
+    
+2. Change the schema to `mongo.test`.
+
+        USE mongo.test;
+
+        +-------+-----------------------------------------+
+        |  ok   |                 summary                 |
+        +-------+-----------------------------------------+
+        | true  | Default schema changed to [mongo.test]  |
+        +-------+-----------------------------------------+
+
+3. List the tables and verify that the `zips` collection appears:
+
+        SHOW TABLES;
+
+        +---------------+-----------------+
+        | TABLE_SCHEMA  |   TABLE_NAME    |
+        +---------------+-----------------+
+        | mongo.test    | system.indexes  |
+        | mongo.test    | zips            |
+        +---------------+-----------------+
+        2 rows selected (0.187 seconds)
+
+4. Set the option to read numbers as doubles instead of as text:
+
+        ALTER SYSTEM SET `store.mongo.read_numbers_as_double` = true;
+        +-------+----------------------------------------------+
+        |  ok   |                   summary                    |
+        +-------+----------------------------------------------+
+        | true  | store.mongo.read_numbers_as_double updated.  |
+        +-------+----------------------------------------------+
+        1 row selected (0.078 seconds)
+
+
 
 ### Example Queries
 
-**Example 1: View mongo.zipdb Dataset**
-
-    0: jdbc:drill:zk=local> SELECT * FROM zipcodes LIMIT 10;
-    +------------------------------------------------------------------------------------------------+
-    |                                           *                                                    |
-    +------------------------------------------------------------------------------------------------+
-    | { "city" : "AGAWAM" , "loc" : [ -72.622739 , 42.070206] , "pop" : 15338 , "state" : "MA"}      |
-    | { "city" : "CUSHMAN" , "loc" : [ -72.51565 , 42.377017] , "pop" : 36963 , "state" : "MA"}      |
-    | { "city" : "BARRE" , "loc" : [ -72.108354 , 42.409698] , "pop" : 4546 , "state" : "MA"}        |
-    | { "city" : "BELCHERTOWN" , "loc" : [ -72.410953 , 42.275103] , "pop" : 10579 , "state" : "MA"} |
-    | { "city" : "BLANDFORD" , "loc" : [ -72.936114 , 42.182949] , "pop" : 1240 , "state" : "MA"}    |
-    | { "city" : "BRIMFIELD" , "loc" : [ -72.188455 , 42.116543] , "pop" : 3706 , "state" : "MA"}    |
-    | { "city" : "CHESTER" , "loc" : [ -72.988761 , 42.279421] , "pop" : 1688 , "state" : "MA"}      |
-    | { "city" : "CHESTERFIELD" , "loc" : [ -72.833309 , 42.38167] , "pop" : 177 , "state" : "MA"}   |
-    | { "city" : "CHICOPEE" , "loc" : [ -72.607962 , 42.162046] , "pop" : 23396 , "state" : "MA"}    |
-    | { "city" : "CHICOPEE" , "loc" : [ -72.576142 , 42.176443] , "pop" : 31495 , "state" : "MA"}    |
+**Example 1: View the zips Collection**
+
+    SELECT * FROM zips LIMIT 10;
+
+    +---------------+-------------------------+--------+--------+
+    |     city      |           loc           |  pop   | state  |
+    +---------------+-------------------------+--------+--------+
+    | AGAWAM        | [-72.622739,42.070206]  | 15338  | MA     |
+    | CUSHMAN       | [-72.51565,42.377017]   | 36963  | MA     |
+    | BELCHERTOWN   | [-72.410953,42.275103]  | 10579  | MA     |
+    | BLANDFORD     | [-72.936114,42.182949]  | 1240   | MA     |
+    | BRIMFIELD     | [-72.188455,42.116543]  | 3706   | MA     |
+    | CHESTERFIELD  | [-72.833309,42.38167]   | 177    | MA     |
+    | BARRE         | [-72.108354,42.409698]  | 4546   | MA     |
+    | CHICOPEE      | [-72.607962,42.162046]  | 23396  | MA     |
+    | CHICOPEE      | [-72.576142,42.176443]  | 31495  | MA     |
+    | CHESTER       | [-72.988761,42.279421]  | 1688   | MA     |
+    +---------------+-------------------------+--------+--------+
+    10 rows selected (0.444 seconds)
+
 
 **Example 2: Aggregation**
 
-    0: jdbc:drill:zk=local> select state,city,avg(pop)
-    +------------+------------+------------+
-    |   state    |    city    |   EXPR$2   |
-    +------------+------------+------------+
-    | MA         | AGAWAM     | 15338.0    |
-    | MA         | CUSHMAN    | 36963.0    |
-    | MA         | BARRE      | 4546.0     |
-    | MA         | BELCHERTOWN | 10579.0   |
-    | MA         | BLANDFORD  | 1240.0     |
-    | MA         | BRIMFIELD  | 3706.0     |
-    | MA         | CHESTER    | 1688.0     |
-    | MA         | CHESTERFIELD | 177.0    |
-    | MA         | CHICOPEE   | 27445.5    |
-    | MA         | WESTOVER AFB | 1764.0   |
-    +------------+------------+------------+
+```
+SELECT city, avg(pop) FROM zips GROUP BY city LIMIT 10; 
+
++---------------+---------------------+
+|     city      |       EXPR$1        |
++---------------+---------------------+
+| AGAWAM        | 15338.0             |
+| CUSHMAN       | 18649.5             |
+| BELCHERTOWN   | 10579.0             |
+| BLANDFORD     | 1240.0              |
+| BRIMFIELD     | 2441.5              |
+| CHESTERFIELD  | 9988.857142857143   |
+| BARRE         | 9770.0              |
+| CHICOPEE      | 27445.5             |
+| CHESTER       | 7285.0952380952385  |
+| WESTOVER AFB  | 1764.0              |
++---------------+---------------------+
+10 rows selected (1.664 seconds)
+```
 
 **Example 3: Nested Data Column Array**
 
-    0: jdbc:drill:zk=local> SELECT loc FROM zipcodes LIMIT 10;
+    0: jdbc:drill:zk=local> SELECT loc FROM zips LIMIT 10;
     +------------------------+
     |    loc                 |
     +------------------------+
@@ -116,7 +162,7 @@ the `USE` command to change schema.
     | [-72.576142,42.176443] |
     +------------------------+
         
-    0: jdbc:drill:zk=local> SELECT loc[0] FROM zipcodes LIMIT 10;
+    0: jdbc:drill:zk=local> SELECT loc[0] FROM zips LIMIT 10;
     +------------+
     |   EXPR$0   |
     +------------+

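A possible follow-on query against the same collection, assuming the `zips` sample data and the `mongo.test` schema set up in the steps above (output omitted):

    USE mongo.test;

    -- Find the three most populous zip code areas in California.
    SELECT city, state, pop
    FROM zips
    WHERE state = 'CA'
    ORDER BY pop DESC
    LIMIT 3;
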
http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/connect-a-data-source/100-mapr-db-format.md
----------------------------------------------------------------------
diff --git a/_docs/connect-a-data-source/100-mapr-db-format.md b/_docs/connect-a-data-source/100-mapr-db-format.md
index 57d66c2..a5c7403 100755
--- a/_docs/connect-a-data-source/100-mapr-db-format.md
+++ b/_docs/connect-a-data-source/100-mapr-db-format.md
@@ -3,7 +3,7 @@ title: "MapR-DB Format"
 parent: "Connect a Data Source"
 ---
 The MapR-DB format is not included in the Apache Drill release. Drill includes a `maprdb` format for MapR-DB that is defined within the
-default `dfs` storage plugin instance when you install Drill from the `mapr-drill` package on a MapR node. The `maprdb` format improves the
+default `dfs` storage plugin configuration when you install Drill from the `mapr-drill` package on a MapR node. The `maprdb` format improves the
 estimated number of rows that Drill uses to plan a query. It also enables you
 to query tables like you would query files in a file system because MapR-DB
 and MapR-FS share the same namespace. 
@@ -20,8 +20,8 @@ query. The userid running the query must have read permission to access the MapR
 
     SELECT * FROM mfs.`/users/max/mytable`;
 
-The following image shows a portion of the configuration with the `maprdb`
-format for the `dfs` instance:
+The following image shows a portion of the `dfs` configuration with the `maprdb`
+format:
 
 ![drill query flow]({{ site.baseurl }}/docs/img/18.png)
 

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md b/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
index ffda83e..251f29d 100644
--- a/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
+++ b/_docs/data-sources-and-file-formats/020-hive-to-drill-data-type-mapping.md
@@ -16,8 +16,7 @@ Using Drill you can read tables created in Hive that use data types compatible w
 | FLOAT              | FLOAT                   | 4-byte single precision floating point number              |
 | DOUBLE             | DOUBLE                  | 8-byte double precision floating point number              |
 | INTEGER            | INT, TINYINT, SMALLINT  | 1-, 2-, or 4-byte signed integer                           |
-| INTERVALDAY        | N/A                     | Integer fields representing a day                          |
-| INTERVALYEAR       | N/A                     | Integer fields representing a year                         |
+| INTERVAL           | N/A                     | A day-time or year-month interval                          |
 | TIME               | N/A                     | Hours minutes seconds 24-hour basis                        |
 | N/A                | TIMESTAMP               | Conventional UNIX Epoch timestamp.                         |
 | TIMESTAMP          | TIMESTAMP               | JDBC timestamp in yyyy-mm-dd hh:mm:ss format               |
@@ -36,7 +35,11 @@ Drill does not support the following Hive types:
 * TIMESTAMP (Unix Epoch format)
 * UNION
 
-The Hive version used in MapR supports the Hive timestamp in Unix Epoch format. Currently, the Apache Hive version used by Drill does not support this timestamp format. The workaround is to use the JDBC format for the timestamp, which Hive accepts and Drill uses, as shown in the following type mapping example. The timestamp value appears in the example CSV file in JDBC format: 2015-03-25 01:23:15. The Hive table defines column i in the CREATE EXTERNAL TABLE command as a timestamp column. The Drill extract function verifies that Drill interprets the timestamp correctly.
+Currently, the Apache Hive version used by Drill does not support the Hive timestamp in Unix Epoch format. The workaround is to use the JDBC format for the timestamp, which Hive accepts and Drill uses. The type mapping example shows how to apply the workaround:
+
+* The timestamp value appears in the example CSV file in JDBC format: 2015-03-25 01:23:15.  
+* Workaround: The Hive table defines column i in the CREATE EXTERNAL TABLE command as a timestamp column.  
+* The Drill extract function verifies that Drill interprets the timestamp correctly.
 
 ## Type Mapping Example
 This example demonstrates the mapping of Hive data types to Drill data types. Using a CSV that has the following contents, you create a Hive table having values of different supported types:
@@ -46,7 +49,7 @@ This example demonstrates the mapping of Hive data types to Drill data types. Us
 ### Example Assumptions
 The example makes the following assumptions:
 
-* The CSV resides on the MapR file system (MapRFS) in the Drill sandbox: `/mapr/demo.mapr.com/data/`  
+* The CSV resides in the following location in the Drill sandbox: `/mapr/demo.mapr.com/data/`  
 * You [enabled the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) in Drill.  
 
 ### Define an External Table in Hive

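A sketch of the timestamp verification described in the workaround above, assuming a Hive table named `mytable` (hypothetical) in which column `i` holds the JDBC-format timestamp:

    -- A correct year in the result confirms that Drill
    -- interpreted the JDBC-format timestamp.
    SELECT EXTRACT(year FROM i) FROM hive.`mytable`;
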
http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md b/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
index 2cc0db0..53100ac 100644
--- a/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
+++ b/_docs/data-sources-and-file-formats/030-deploying-and-using-a-hive-udf.md
@@ -5,15 +5,35 @@ parent: "Data Sources and File Formats"
 If the extensive Hive functions, such as the mathematical and date functions, which Drill supports do not meet your needs, you can use a Hive UDF in Drill queries. Drill supports your existing Hive scalar UDFs. You can do queries on Hive tables and access existing Hive input/output formats, including custom serdes. Drill serves as a complement to Hive deployments by offering low latency queries.
 
 ## Creating the UDF
-You create the JAR for a UDF to use in Drill in a conventional manner with a few caveats, using a unique name and creating a Drill resource, covered in this section.
+You create the JAR for a UDF to use in Drill in the conventional manner, with a few caveats covered in this section: use a unique name for the function and create a Drill resource. Sample code for this function is on [GitHub](https://github.com/viadea/HiveUDF).
 
 1. Use a unique name for the Hive UDF to avoid conflicts with Drill custom functions of the same name.
+
+        @Description(
+                name = "my_upper",
+                value = "_FUNC_(str) - Converts a string to uppercase",
+                extended = "Example:\n" +
+                "  > SELECT my_upper(a) FROM test;\n" +
+                "  ABC"
+                )
+
 2. Create a custom Hive UDF using either of these APIs:  
    * Simple API: org.apache.hadoop.hive.ql.exec.UDF
    * Complex API: org.apache.hadoop.hive.ql.udf.generic.GenericUDF
-3. Create an empty `drill-module.conf` in the resources directory in the Java project. 
+3. Create an empty `drill-module.conf` in the resources directory in the Java project.  
+
+        # ls -altr src/main/resources/drill-module.conf
+        -rw-r--r-- 1 root root 0 Aug 12 23:16 src/main/resources/drill-module.conf
+
 4. Export the logic to a JAR, including the `drill-module.conf` file in resources.
 
+5. Make sure the `drill-module.conf` file is in the JAR.
+
+        # jar tf target/MyUDF-1.0.0.jar  |grep -i drill
+        drill-module.conf
+
+6. Test the UDF in Hive as shown in the [GitHub readme](https://github.com/viadea/HiveUDF#c-test-udf).
+
 The `drill-module.conf` file defines [startup options]({{ site.baseurl }}/docs/start-up-options/) and makes the JAR functions available to use in queries throughout the Hadoop cluster. After exporting the UDF logic to a JAR file, set up the UDF in Drill. Drill users can access the custom UDF for use in Hive queries.
 
 ## Setting Up a UDF
@@ -21,16 +41,23 @@ After you export the custom UDF as a JAR, perform the UDF setup tasks so Drill c
  
 To set up the UDF:
 
-1. Register Hive. [Register a Hive storage plugin]({{ site.baseurl }}/docs/hive-storage-plugin/) that connects Drill to a Hive data source.
-2. Add the JAR for the UDF to the Drill CLASSPATH. In earlier versions of Drill, place the JAR file in the `/jars/3rdparty` directory of the Drill installation on all nodes running a Drillbit.
-3. On each Drill node in the cluster, restart the Drillbit.
+1. Enable the default [Hive storage plugin configuration]({{ site.baseurl }}/docs/hive-storage-plugin/) that connects Drill to a Hive data source.  
+2. Add the JAR for the UDF to the `/jars/3rdparty` directory of the Drill installation on all nodes running a Drillbit.  
+    `clush -a cp /xxx/target/MyUDF-1.0.0.jar /xxx/drill-1.1.0/jars/3rdparty/`  
+3. On each Drill node in the cluster, restart the Drillbit.  
    `<drill installation directory>/bin/drillbit.sh restart`
  
 ## Using a UDF
 Use a Hive UDF just as you would use a Drill custom function. For example, to query using a Hive UDF named upper-to-lower that takes a column.value argument, the SELECT statement looks something like this:  
      
-     SELECT upper-to-lower(my_column.myvalue) FROM mytable;
-     
+    SELECT MY_UPPER('abc') from (VALUES(1));
+    +---------+
+    | EXPR$0  |
+    +---------+
+    | ABC     |
+    +---------+
+    1 row selected (1.516 seconds)
+
 
 
 

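A note on the `(VALUES(1))` clause in the example above: it supplies a one-row input so that a function can be tested without querying a real table. For instance:

    -- VALUES provides a single dummy row to select against.
    SELECT 'abc' FROM (VALUES(1));
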
http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/data-sources-and-file-formats/040-parquet-format.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/040-parquet-format.md b/_docs/data-sources-and-file-formats/040-parquet-format.md
index 867c4f4..449e138 100644
--- a/_docs/data-sources-and-file-formats/040-parquet-format.md
+++ b/_docs/data-sources-and-file-formats/040-parquet-format.md
@@ -147,22 +147,22 @@ The first table in this section maps SQL data types to Parquet data types, limit
 ### SQL Types to Parquet Logical Types
 Parquet also supports logical types, fully described on the [Apache Parquet site](https://github.com/Parquet/parquet-format/blob/master/LogicalTypes.md). Embedded types, JSON and BSON, annotate a binary primitive type representing a JSON or BSON document. The logical types and their mapping to SQL types are:
  
-| SQL Type                     | Drill Description                                                              | Parquet Logical Type | Parquet Description                                                                                                                        |
-|------------------------------|--------------------------------------------------------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
-| DATE                         | Years months and days in the form in the form YYYY-­MM-­DD                     | DATE                 | Date, not including time of day. Uses the int32 annotation. Stores the number of days from the Unix epoch, 1 January 1970.                 |
-| VARCHAR                      | Character string variable length                                               | UTF8 (Strings)       | Annotates the binary primitive type. The byte array is interpreted as a UTF-8 encoded character string.                                    |
-| None                         |                                                                                | INT_8                | 8 bits, signed                                                                                                                             |
-| None                         |                                                                                | INT_16               | 16 bits, usigned                                                                                                                           |
-| INT                          | 4-byte signed integer                                                          | INT_32               | 32 bits, signed                                                                                                                            |
-| DOUBLE                       | 8-byte double precision floating point number                                  | INT_64               | 64 bits, signed                                                                                                                            |
-| None                         |                                                                                | UINT_8               | 8 bits, unsigned                                                                                                                           |
-| None                         |                                                                                | UINT_16              | 16 bits, unsigned                                                                                                                          |
-| None                         |                                                                                | UINT_32              | 32 bits, unsigned                                                                                                                          |
-| None                         |                                                                                | UINT_64              | 64 bits, unsigned                                                                                                                          |
-| DECIMAL*                     | 38-digit precision                                                             | DECIMAL              | Arbitrary-precision signed decimal numbers of the form unscaledValue * 10^(-scale)                                                         |
-| TIME                         | Hours, minutes, seconds, milliseconds; 24-hour basis                           | TIME_MILLIS          | Logical time, not including the date. Annotates int32. Number of milliseconds after midnight.                                              |
-| TIMESTAMP                    | Year, month, day, and seconds                                                  | TIMESTAMP_MILLIS     | Logical date and time. Annotates an int64 that stores the number of milliseconds from the Unix epoch, 00:00:00.000 on 1 January 1970, UTC. |
-| INTERVALDAY and INTERVALYEAR | Integer fields representing a period of time depending on the type of interval | INTERVAL             | An interval of time. Annotates a fixed_len_byte_array of length 12. Months, days, and ms in unsigned little-endian format.                 |
+| SQL Type   | Drill Description                                                              | Parquet Logical Type | Parquet Description                                                                                                                        |
+|------------|--------------------------------------------------------------------------------|----------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
+| DATE       | Years, months, and days in the form YYYY-MM-DD                                 | DATE                 | Date, not including time of day. Uses the int32 annotation. Stores the number of days from the Unix epoch, 1 January 1970.                 |
+| VARCHAR    | Character string variable length                                               | UTF8 (Strings)       | Annotates the binary primitive type. The byte array is interpreted as a UTF-8 encoded character string.                                    |
+| None       |                                                                                | INT_8                | 8 bits, signed                                                                                                                             |
+| None       |                                                                                | INT_16               | 16 bits, signed                                                                                                                            |
+| INT        | 4-byte signed integer                                                          | INT_32               | 32 bits, signed                                                                                                                            |
+| DOUBLE     | 8-byte double precision floating point number                                  | INT_64               | 64 bits, signed                                                                                                                            |
+| None       |                                                                                | UINT_8               | 8 bits, unsigned                                                                                                                           |
+| None       |                                                                                | UINT_16              | 16 bits, unsigned                                                                                                                          |
+| None       |                                                                                | UINT_32              | 32 bits, unsigned                                                                                                                          |
+| None       |                                                                                | UINT_64              | 64 bits, unsigned                                                                                                                          |
+| DECIMAL*   | 38-digit precision                                                             | DECIMAL              | Arbitrary-precision signed decimal numbers of the form unscaledValue * 10^(-scale)                                                         |
+| TIME       | Hours, minutes, seconds, milliseconds; 24-hour basis                           | TIME_MILLIS          | Logical time, not including the date. Annotates int32. Number of milliseconds after midnight.                                              |
+| TIMESTAMP  | Year, month, day, and seconds                                                  | TIMESTAMP_MILLIS     | Logical date and time. Annotates an int64 that stores the number of milliseconds from the Unix epoch, 00:00:00.000 on 1 January 1970, UTC. |
+| INTERVAL   | Integer fields representing a period of time depending on the type of interval | INTERVAL             | An interval of time. Annotates a fixed_len_byte_array of length 12. Months, days, and ms in unsigned little-endian encoding.                 |
 
 \* In this release, Drill disables the DECIMAL data type, including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. To enable the DECIMAL type, set the `planner.enable_decimal_data_type` option to `true`.
 
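Following the note above, enabling and exercising the DECIMAL type might look like this. This is a sketch that reuses the option name from the note and the `(VALUES(1))` idiom that appears elsewhere in these docs:

    -- Enable the DECIMAL type, which this release disables by default.
    ALTER SESSION SET `planner.enable_decimal_data_type` = true;

    -- Cast a literal to DECIMAL to confirm the type is usable.
    SELECT CAST(1.25 AS DECIMAL(9,3)) FROM (VALUES(1));
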

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/data-sources-and-file-formats/050-json-data-model.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/050-json-data-model.md b/_docs/data-sources-and-file-formats/050-json-data-model.md
index b8f2872..eb3bf86 100644
--- a/_docs/data-sources-and-file-formats/050-json-data-model.md
+++ b/_docs/data-sources-and-file-formats/050-json-data-model.md
@@ -424,7 +424,7 @@ After removing the extraneous square brackets in the coordinates array, you can
 ### Lengthy JSON objects
 Currently, Drill cannot manage lengthy JSON objects, such as a gigabit JSON file. Finding the beginning and end of records can be time consuming and require scanning the whole file.
 
-Workaround: Use a tool to split the JSON file into smaller chunks of 64-128MB or 64-256MB initially until you know the total data size and node configuration. Keep the JSON objects intact in each file. A distributed file system, such as MapR-FS, is recommended over trying to manage file partitions.
+Workaround: Use a tool to split the JSON file into smaller chunks of 64-128MB or 64-256MB initially until you know the total data size and node configuration. Keep the JSON objects intact in each file. A distributed file system, such as HDFS, is recommended over trying to manage file partitions.
 
 ### Complex JSON objects
 Complex arrays and maps can be difficult or impossible to query.

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md
----------------------------------------------------------------------
diff --git a/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md b/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md
index 4d34954..678c270 100644
--- a/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md
+++ b/_docs/data-sources-and-file-formats/060-text-files-csv-tsv-psv.md
@@ -165,4 +165,66 @@ You can use a different extension for files with and without a header, and use a
       "delimiter": ","
     },
 
+## Converting a CSV file to Apache Parquet
 
+A common use case when working with Hadoop is to store and query text files, such as CSV and TSV. To get better performance and efficient storage, you convert these files into Parquet. You can use code to achieve this, as you can see in the [ConvertUtils](https://github.com/Parquet/parquet-compatibility/blob/master/parquet-compat/src/test/java/parquet/compat/test/ConvertUtils.java) sample/test class. A simpler way is to query the text files with Drill and save the result as Parquet files.
+
+### How to Convert CSV to Parquet
+
+This example uses the [Passenger Dataset](http://media.flysfo.com/media/sfo/media/air-traffic/Passenger_4.zip) from SFO Air Traffic Statistics.
+
+1. Execute a basic query:
+
+        SELECT * 
+        FROM dfs.`/opendata/Passenger/SFO_Passenger_Data/MonthlyPassengerData_200507_to_201503.csv`
+        LIMIT 5;
+
+        ["200507","ATA Airlines","TZ","ATA Airlines","TZ","Domestic","US","Deplaned","Low Fare","Terminal 1","B","27271\r"]
+        ...
+        ...
+
+   By default, Drill processes each line as an array of columns, with every value read as a simple string. To perform operations on these values (a projection or a conditional query), you must cast the strings to the proper types. 
+
+2. Use the column index, and cast the value to the proper type. 
+
+        SELECT 
+        columns[0] as `DATE`,
+        columns[1] as `AIRLINE`,
+        CAST(columns[11] AS DOUBLE) as `PASSENGER_COUNT`
+        FROM dfs.`/opendata/Passenger/SFO_Passenger_Data/*.csv`
+        WHERE CAST(columns[11] AS DOUBLE) < 5
+        ;
+
+        +---------+-----------------------------------+------------------+
+        |  DATE   |              AIRLINE              | PASSENGER_COUNT  |
+        +---------+-----------------------------------+------------------+
+        | 200610  | United Airlines - Pre 07/01/2013  | 2.0              |
+        ...
+        ...
+
+3. Create Parquet files.
+
+        ALTER SESSION SET `store.format`='parquet';
+
+
+        CREATE TABLE dfs.tmp.`/stats/airport_data/` AS
+        SELECT
+        CAST(SUBSTR(columns[0],1,4) AS INT)  `YEAR`,
+        CAST(SUBSTR(columns[0],5,2) AS INT) `MONTH`,
+        columns[1] as `AIRLINE`,
+        columns[2] as `IATA_CODE`,
+        columns[3] as `AIRLINE_2`,
+        columns[4] as `IATA_CODE_2`,
+        columns[5] as `GEO_SUMMARY`,
+        columns[6] as `GEO_REGION`,
+        columns[7] as `ACTIVITY_CODE`,
+        columns[8] as `PRICE_CODE`,
+        columns[9] as `TERMINAL`,
+        columns[10] as `BOARDING_AREA`,
+        CAST(columns[11] AS DOUBLE) as `PASSENGER_COUNT`
+        FROM dfs.`/opendata/Passenger/SFO_Passenger_Data/*.csv`
+
+4. Use the Parquet file in any of your Hadoop processes, or use Drill to query the file as follows:
+
+        SELECT *
+        FROM dfs.tmp.`/stats/airport_data/*`

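A quick sanity check on the conversion, assuming the table created in step 3 (the row count depends on the dataset):

    -- Confirm the Parquet files are readable.
    SELECT COUNT(*) FROM dfs.tmp.`/stats/airport_data/*`;
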
http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/getting-started/010-drill-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/getting-started/010-drill-introduction.md b/_docs/getting-started/010-drill-introduction.md
index 0a3538b..f72dc93 100644
--- a/_docs/getting-started/010-drill-introduction.md
+++ b/_docs/getting-started/010-drill-introduction.md
@@ -36,7 +36,7 @@ In this release, Drill disables the DECIMAL data type, including casting to DECI
 Key features of Apache Drill are:
 
   * Low-latency SQL queries
-  * Dynamic queries on self-describing data in files (such as JSON, Parquet, text) and MapR-DB/HBase tables, without requiring metadata definitions in the Hive metastore.
+  * Dynamic queries on self-describing data in files (such as JSON, Parquet, text) and HBase tables, without requiring metadata definitions in the Hive metastore.
   * ANSI SQL
   * Nested data support
   * Integration with Apache Hive (queries on Hive tables and views, support for all Hive file formats and Hive UDFs)

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/getting-started/020-why-drill.md
----------------------------------------------------------------------
diff --git a/_docs/getting-started/020-why-drill.md b/_docs/getting-started/020-why-drill.md
index de9beb3..b522844 100644
--- a/_docs/getting-started/020-why-drill.md
+++ b/_docs/getting-started/020-why-drill.md
@@ -66,7 +66,7 @@ Apache Drill lets you leverage your investments in Hive. You can run interactive
 
 
 ## 7. Access multiple data sources
-Drill is extensible. You can connect Drill out-of-the-box to file systems (local or distributed, such as S3, HDFS and MapR-FS), HBase and Hive. You can implement a storage plugin to make Drill work with any other data source. Drill can combine data from multiple data sources on the fly in a single query, with no centralized metadata definitions. Here's a query that combines data from a Hive table, an HBase table (view) and a JSON file:
+Drill is extensible. You can connect Drill out-of-the-box to file systems (local or distributed, such as S3 and HDFS), HBase and Hive. You can implement a storage plugin to make Drill work with any other data source. Drill can combine data from multiple data sources on the fly in a single query, with no centralized metadata definitions. Here's a query that combines data from a Hive table, an HBase table (view) and a JSON file:
 
     SELECT custview.membership, sum(orders.order_total) AS sales
     FROM hive.orders, custview, dfs.`clicks/clicks.json` c 

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md
index 10ea9df..2c8f76c 100755
--- a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/020-tableau-examples.md
@@ -13,7 +13,7 @@ This section includes the following examples:
   * Using custom SQL to connect to data in a Parquet file
 
 The steps and results of these examples assume pre-configured schemas and
-source data. You configure schemas as storage plugin instances on the Storage
+source data. You define schemas by configuring storage plugins on the Storage
 tab of the [Drill Web UI]({{ site.baseurl }}/docs/getting-to-know-the-drill-sandbox#storage-plugin-overview). Also, the examples assume you [enabled the DECIMAL data type]({{site.baseurl}}/docs/supported-data-types#enabling-the-decimal-type) in Drill.  
 
 ## Example: Connect to a Hive Table in Tableau

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/080-configuring-jreport.md
----------------------------------------------------------------------
diff --git a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/080-configuring-jreport.md b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/080-configuring-jreport.md
index 1782ad1..35f147a 100644
--- a/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/080-configuring-jreport.md
+++ b/_docs/odbc-jdbc-interfaces/using-drill-with-bi-tools/080-configuring-jreport.md
@@ -1,34 +1 @@
----
-title: "Configuring JReport with Drill"
-parent: "Using Drill with BI Tools"
----
-
-JReport is an embeddable BI solution that empowers users to analyze data and create reports and dashboards. JReport accesses data from Hadoop systems, such as the MapR Distribution through Apache Drill, as well as other big data and transactional data sources. By visualizing data through Drill, users can perform their own reporting and data discovery for agile, on-the-fly decision-making.
-
-You can use JReport 13.1 and the the Apache Drill JDBC Driver to easily extract data from the MapR Distribution and visulaize it, creating reports and dashboards that you can embed into your own applications.
-
-Complete the following simple steps to use Apache Drill with JReport:
-
-
-1. Install the Drill JDBC Driver with JReport.
-2. Create a new JReport Catalog to manage the Drill connection.
-3. Use JReport Designer to query the data and create a report.
-
-----------
-
-### Step 1: Install the Drill JDBC Driver with JReport
-
-Drill provides standard JDBC connectivity to easily integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
-
-For general instructions on installing the Drill JDBC driver, see [Using JDBC]({{ site.baseurl }}/docs/using-jdbc/).
-
-1. Locate the JDBC driver in the Drill installation directory on any node where Drill is installed on the cluster: 
-
-        <drill-home>/jars/jdbc-driver/drill-jdbc-all-<drill-version>.jar 
-   For example:
-
-        /opt/mapr/drill/drill-1.0.0/jars/jdbc-driver/drill-jdbc-all-1.0.0.jar
-   
-2. Copy the Drill JDBC driver into the JReport `lib` folder:
-
-        %REPORTHOME%\lib\
-   For example, on Windows, copy the Drill JDBC driver jar file into:
-   
-        C:\JReport\Designer\lib\drill-jdbc-all-1.0.0.jar
-    
-3.	Add the location of the JAR file to the JReport CLASSPATH variable. On Windows, edit the `C:\JReport\Designer\bin\setenv.bat` file:
-
-    ![drill query flow]({{ site.baseurl }}/docs/img/jreport_setenv.png)
-
-4. Verify that the JReport system can resolve the hostnames of the ZooKeeper nodes of the Drill cluster. You can do this by configuring DNS for all of the systems. Alternatively, you can edit the hosts file on the JReport system to include the hostnames and IP addresses of all the ZooKeeper nodes used with the Drill cluster.  For Linux systems, the hosts file is located at `/etc/hosts`. For Windows systems, the hosts file is located at `%WINDIR%\system32\drivers\etc\hosts`  Here is an example of a Windows hosts file: ![drill query flow]({{ site.baseurl }}/docs/img/jreport-hostsfile.png)
-
-----------
-
-### Step 2: Create a New JReport Catalog to Manage the Drill Connection
-
-1.	Click Create **New -> Catalog…**
-2.	Provide a catalog file name and click **…** to choose the file-saving location.
-3.	Click **View -> Catalog Browser**.
-4.	Right-click **Data Source 1** and select **Add JDBC Connection**.
-5.	Fill in the **Driver**, **URL**, **User**, and **Password** fields. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-catalogbrowser.png)
-6.	Click **Options** and select the **Qualifier** tab. 
-7.	In the **Quote Qualifier** section, choose **User Defined** and change the quote character from “ to ` (backtick). ![drill query flow]({{ site.baseurl }}/docs/img/jreport-quotequalifier.png)
-8.	Click **OK**. JReport will verify the connection and save all information.
-9.	Add tables and views to the JReport catalog by right-clicking the connection node and choosing **Add Table**. Now you can browse the schemas and add specific tables that you want to make available for building queries. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-addtable.png)
-10.	Click **Done** when you have added all the tables you need. 
-
-
-### Step 3: Use JReport Designer
-
-1.	In the Catalog Browser, right-click **Queries** and select **Add Query…**
-2.	Define a JReport query by using the Query Editor. You can also import your own SQL statements. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-queryeditor.png)
-3.	Click **OK** to close the Query Editor, and click the **Save Catalog** button to save your progress to the catalog file. 
-
-    **Note**: If the report returns errors, you may need to edit the query and add the schema in front of the table name: `select column from schema.table_name` You can do this by clicking the **SQL** button on the Query Editor.
-
-5.  Use JReport Designer to query the data and create a report. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-crosstab.png)
-
-    ![drill query flow]({{ site.baseurl }}/docs/img/jreport-crosstab2.png)
-
-    ![drill query flow]({{ site.baseurl }}/docs/img/jreport-crosstab3.png)
\ No newline at end of file
+---
+title: "Configuring JReport with Drill"
+parent: "Using Drill with BI Tools"
+---
+
+JReport is an embeddable BI solution that empowers users to analyze data and create reports and dashboards. JReport accesses data from Hadoop systems through Apache Drill. By visualizing data through Drill, users can perform their own reporting and data discovery for agile, on-the-fly decision-making.
+
+You can use JReport 13.1 and the Apache Drill JDBC Driver to easily extract data and visualize it, creating reports and dashboards that you can embed into your own applications. Complete the following simple steps to use Apache Drill with JReport:
+
+1. Install the Drill JDBC Driver with JReport.
+2. Create a new JReport Catalog to manage the Drill connection.
+3. Use JReport Designer to query the data and create a report.
+
+----------
+
+### Step 1: Install the Drill JDBC Driver with JReport
+
+Drill provides standard JDBC connectivity to integrate with JReport. JReport 13.1 requires Drill 1.0 or later.
+For general instructions on installing the Drill JDBC driver, see [Using JDBC]({{ site.baseurl }}/docs/using-the-jdbc-driver/).
+
+1. Locate the JDBC driver in the Drill installation directory on any node where Drill is installed on the cluster: 
+        <drill-home>/jars/jdbc-driver/drill-jdbc-all-<drill-version>.jar 
+   
+2. Copy the Drill JDBC driver into the JReport `lib` folder:
+        %REPORTHOME%\lib\
+   For example, on Windows, copy the Drill JDBC driver jar file into:
+   
+        C:\JReport\Designer\lib\drill-jdbc-all-1.0.0.jar
+    
+3.	Add the location of the JAR file to the JReport CLASSPATH variable. On Windows, edit the `C:\JReport\Designer\bin\setenv.bat` file:
+    ![drill query flow]({{ site.baseurl }}/docs/img/jreport_setenv.png)
+
+4. Verify that the JReport system can resolve the hostnames of the ZooKeeper nodes of the Drill cluster. You can do this by configuring DNS for all of the systems. Alternatively, you can edit the hosts file on the JReport system to include the hostnames and IP addresses of all the ZooKeeper nodes used with the Drill cluster.  For Linux systems, the hosts file is located at `/etc/hosts`. For Windows systems, the hosts file is located at `%WINDIR%\system32\drivers\etc\hosts`  Here is an example of a Windows hosts file: ![drill query flow]({{ site.baseurl }}/docs/img/jreport-hostsfile.png)
+
+----------
+
+### Step 2: Create a New JReport Catalog to Manage the Drill Connection
+
+1.	Click Create **New -> Catalog…**
+2.	Provide a catalog file name and click **…** to choose the file-saving location.
+3.	Click **View -> Catalog Browser**.
+4.	Right-click **Data Source 1** and select **Add JDBC Connection**.
+5.	Fill in the **Driver**, **URL**, **User**, and **Password** fields, as shown in the example settings after this list. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-catalogbrowser.png)
+6.	Click **Options** and select the **Qualifier** tab. 
+7.	In the **Quote Qualifier** section, choose **User Defined** and change the quote character from “ to ` (backtick). ![drill query flow]({{ site.baseurl }}/docs/img/jreport-quotequalifier.png)
+8.	Click **OK**. JReport will verify the connection and save all information.
+9.	Add tables and views to the JReport catalog by right-clicking the connection node and choosing **Add Table**. Now you can browse the schemas and add specific tables that you want to make available for building queries. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-addtable.png)
+10.	Click **Done** when you have added all the tables you need. 
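+
+For reference (see step 5), a typical Drill connection uses the driver class `org.apache.drill.jdbc.Driver` and a ZooKeeper-based JDBC URL; the host name, port, and cluster ID below are placeholders for your environment:
+
+    Driver: org.apache.drill.jdbc.Driver
+    URL:    jdbc:drill:zk=<zk-hostname>:<zk-port>/drill/<cluster-id>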
+
+
+### Step 3: Use JReport Designer
+
+1.	In the Catalog Browser, right-click **Queries** and select **Add Query…**
+2.	Define a JReport query by using the Query Editor. You can also import your own SQL statements. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-queryeditor.png)
+3.	Click **OK** to close the Query Editor, and click the **Save Catalog** button to save your progress to the catalog file. 
+    **Note**: If the report returns errors, you may need to edit the query and add the schema in front of the table name: `select column from schema.table_name`. You can do this by clicking the **SQL** button on the Query Editor.
+
+4.  Use JReport Designer to query the data and create a report. ![drill query flow]({{ site.baseurl }}/docs/img/jreport-crosstab.png)
+    ![drill query flow]({{ site.baseurl }}/docs/img/jreport-crosstab2.png)
+    ![drill query flow]({{ site.baseurl }}/docs/img/jreport-crosstab3.png)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/query-data/query-a-file-system/005-querying-a-file-system-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/query-data/query-a-file-system/005-querying-a-file-system-introduction.md b/_docs/query-data/query-a-file-system/005-querying-a-file-system-introduction.md
index 6d204ca..2e58f54 100644
--- a/_docs/query-data/query-a-file-system/005-querying-a-file-system-introduction.md
+++ b/_docs/query-data/query-a-file-system/005-querying-a-file-system-introduction.md
@@ -13,7 +13,7 @@ distributed file system:
 
        SELECT * FROM hdfs.logs.`AppServerLogs/20104/Jan/01/part0001.txt`;
 
-The default `dfs` storage plugin instance registered with Drill has a
+The default `dfs` storage plugin configuration registered with Drill has a
 `default` workspace. If you query data in the `default` workspace, you do not
 need to include the workspace in the query. Refer to
 [Workspaces]({{ site.baseurl }}/docs/workspaces) for

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/sql-reference/090-sql-extensions.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/090-sql-extensions.md b/_docs/sql-reference/090-sql-extensions.md
index 4896abd..90cfed7 100644
--- a/_docs/sql-reference/090-sql-extensions.md
+++ b/_docs/sql-reference/090-sql-extensions.md
@@ -44,8 +44,9 @@ The [`sys` tables](/docs/querying-system-tables/) provide port, version, and opt
     +------------+
 
     SELECT commit_id FROM sys.version;
-    +------------+
-    | commit_id  |
-    +------------+
-    | e3ab2c1760ad34bda80141e2c3108f7eda7c9104 |
-
+    +-------------------------------------------+
+    |                 commit_id                 |
+    +-------------------------------------------+
+    | e3fc7e97bfe712dc09d43a8a055a5135c96b7344  |
+    +-------------------------------------------+
+    1 row selected (0.105 seconds)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/sql-reference/data-types/010-supported-data-types.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/010-supported-data-types.md b/_docs/sql-reference/data-types/010-supported-data-types.md
index d21f133..1c1d13b 100644
--- a/_docs/sql-reference/data-types/010-supported-data-types.md
+++ b/_docs/sql-reference/data-types/010-supported-data-types.md
@@ -2,28 +2,29 @@
 title: "Supported Data Types"
 parent: "Data Types"
 ---
-Drill reads from and writes to data sources having a wide variety of types. Drill uses data types at the RPC level that are not supported for query input, such as INTERVALDAY and INTERVALYEAR types, often implicitly casting data. Drill supports the following SQL data types for query input:
-
-| SQL Data Type                                     | Description                                                                                                          | Example                                                                        |
-|---------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
-| BIGINT                                            | 8-byte signed integer in the range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807                           | 9223372036854775807                                                            |
-| BINARY                                            | Variable-length byte string                                                                                          | B@e6d9eb7                                                                      |
-| BOOLEAN                                           | True or false                                                                                                        | true                                                                           |
-| DATE                                              | Years, months, and days in YYYY-MM-DD format since 4713 BC                                                           | 2015-12-30                                                                     |
-| DECIMAL(p,s), or DEC(p,s), NUMERIC(p,s)*          | 38-digit precision number, precision is p, and scale is s                                                            | DECIMAL(6,2) is 1234.56,  4 digits before and 2 digits after the decimal point |
-| FLOAT                                             | 4-byte floating point number                                                                                         | 0.456                                                                          |
-| DOUBLE, DOUBLE PRECISION                          | 8-byte floating point number, precision-scalable                                                                     | 0.456                                                                          |
-| INTEGER or INT                                    | 4-byte signed integer in the range -2,147,483,648 to 2,147,483,647                                                   | 2147483646                                                                     |
-| INTERVAL                                          | A period of time in days, hours, minutes, and seconds only (INTERVALDAY) or in years and months (INTERVALYEAR)       | '1 10:20:30.123' (INTERVALDAY) or '1-2' year to month (INTERVALYEAR)           |
-| SMALLINT**                                        | 2-byte signed integer in the range -32,768 to 32,767                                                                 | 32000                                                                          |
-| TIME                                              | 24-hour based time before or after January 1, 2001 in hours, minutes, seconds format: HH:mm:ss                       | 22:55:55.23                                                                    |
-| TIMESTAMP                                         | JDBC timestamp in year, month, date hour, minute, second, and optional milliseconds format: yyyy-MM-dd HH:mm:ss.SSS  | 2015-12-30 22:55:55.23                                                         |
-| CHARACTER VARYING, CHARACTER, CHAR,*** or VARCHAR | UTF8-encoded variable-length string. The default limit is 1 character. The maximum character limit is 2,147,483,647. | CHAR(30) casts data to a 30-character string maximum.                          |
+Drill reads from and writes to data sources having a wide variety of types. Drill supports the following SQL data types for query input:
+
+| SQL Data Type                                        | Description                                                                                                            | Example                                                                        |
+|------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
+| BIGINT                                               | 8-byte signed integer in the range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807                             | 9223372036854775807                                                            |
+| BINARY                                               | Variable-length byte string                                                                                            | B@e6d9eb7                                                                      |
+| BOOLEAN                                              | True or false                                                                                                          | true                                                                           |
+| DATE                                                 | Years, months, and days in YYYY-MM-DD format since 4713 BC                                                             | 2015-12-30                                                                     |
+| DECIMAL(p,s), or DEC(p,s), NUMERIC(p,s)*             | 38-digit precision number, precision is p, and scale is s                                                              | DECIMAL(6,2) is 1234.56,  4 digits before and 2 digits after the decimal point |
+| FLOAT                                                | 4-byte floating point number                                                                                           | 0.456                                                                          |
+| DOUBLE, DOUBLE PRECISION                             | 8-byte floating point number, precision-scalable                                                                       | 0.456                                                                          |
+| INTEGER or INT                                       | 4-byte signed integer in the range -2,147,483,648 to 2,147,483,647                                                     | 2147483646                                                                     |
+| INTERVAL**                                           | A day-time or year-month interval                                                                                      | '1 10:20:30.123' (day-time) or '1-2' year to month (year-month)                |
+| SMALLINT***                                          | 2-byte signed integer in the range -32,768 to 32,767                                                                   | 32000                                                                          |
+| TIME                                                 | 24-hour based time before or after January 1, 2001 in hours, minutes, seconds format: HH:mm:ss                         | 22:55:55.23                                                                    |
+| TIMESTAMP                                            | JDBC timestamp in year, month, date hour, minute, second, and optional milliseconds format: yyyy-MM-dd HH:mm:ss.SSS    | 2015-12-30 22:55:55.23                                                         |
+| CHARACTER VARYING, CHARACTER, CHAR,**** or VARCHAR   | UTF8-encoded variable-length string. The default limit is 1 character. The maximum character limit is 2,147,483,647.   | CHAR(30) casts data to a 30-character string maximum.                          |
 
 
 \* In this release, Drill disables the DECIMAL data type (an alpha feature), including casting to DECIMAL and reading DECIMAL types from Parquet and Hive. The NUMERIC data type is an alias for the DECIMAL data type.  
-\*\* Not currently supported.  
-\*\*\* Currently, Drill supports only variable-length strings.  
+\*\* Internally, INTERVAL is represented as INTERVALDAY or INTERVALYEAR.  
+\*\*\* SMALLINT is not currently supported.  
+\*\*\*\* Currently, Drill supports only variable-length strings.  
 
 ## Enabling the DECIMAL Type
 
@@ -159,12 +160,12 @@ The following tables show data types that Drill can cast to/from other data type
 
 
 \* Not supported in this release.   
-\*\* Used to cast binary UTF-8 data coming to/from sources such as MapR-DB/HBase.   
+\*\* Used to cast binary UTF-8 data coming to/from sources such as HBase.   
 \*\*\* You cannot convert a character string having a decimal point to an INT or BIGINT.   
 
 {% include startnote.html %}The CAST function does not support all representations of FIXEDBINARY and VARBINARY. Only the UTF-8 format is supported. {% include endnote.html %}
 
-If your FIXEDBINARY or VARBINARY data is in a format other than UTF-8, such as big endian, use the CONVERT_TO/FROM functions instead of CAST.
+If your FIXEDBINARY or VARBINARY data is in a format other than UTF-8, or is big-endian encoded, use the CONVERT_TO/FROM functions instead of CAST.
 
 ### Date and Time Data Types
 
@@ -181,18 +182,18 @@ If your FIXEDBINARY or VARBINARY data is in a format other than UTF-8, such as b
 | INTERVALYEAR | Yes  | No   | Yes       | No           | Yes         |
 | INTERVALDAY  | Yes  | No   | Yes       | Yes          | No          |
 
-\* Used to cast binary UTF-8 data coming to/from sources such as MapR-DB/HBase. The CAST function does not support all representations of FIXEDBINARY and VARBINARY. Only the UTF-8 format is supported. 
+\* Used to cast binary UTF-8 data coming to/from sources such as HBase. The CAST function does not support all representations of FIXEDBINARY and VARBINARY. Only the UTF-8 format is supported. 
 
 ## CONVERT_TO and CONVERT_FROM
 
-CONVERT_TO converts data to binary from the input type. CONVERT_FROM converts data from binary to the input type. For example, the following CONVERT_TO function converts an integer in big endian format to VARBINARY:
+CONVERT_TO converts data to binary from the input type. CONVERT_FROM converts data from binary to the input type. For example, the following CONVERT_TO function converts an integer encoded using big endian to VARBINARY:
 
     CONVERT_TO(mycolumn, 'INT_BE')
 
 CONVERT_FROM and CONVERT_TO methods transform a known binary representation/encoding to a Drill internal format. 
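 
 As a further illustration, the following round trip converts an integer to its big endian binary representation and back (the column and table names here are hypothetical placeholders):
 
     SELECT CONVERT_FROM(CONVERT_TO(mycolumn, 'INT_BE'), 'INT_BE')
     FROM mytable;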
 
-We recommend storing HBase/MapR-DB data in a binary representation rather than
-a string representation. Use the \*\_BE types to store integer data types in an HBase or Mapr-DB table.  INT is a 4-byte little endian signed integer. INT_BE is a 4-byte big endian signed integer. The comparison order of \*\_BE encoded bytes is the same as the integer value itself if the bytes are unsigned or positive. Using a *_BE type facilitates scan range pruning and filter pushdown into HBase scan. 
+We recommend storing HBase data in a binary representation rather than
+a string representation. Use the \*\_BE types to store integer data types in an HBase table.  INT is a 4-byte integer encoded in little endian. INT_BE is a 4-byte integer encoded in big endian. The comparison order of \*\_BE encoded bytes is the same as the integer value itself if the bytes are unsigned or positive. Using a \*\_BE type facilitates scan range pruning and filter pushdown into the HBase scan. 
 
 \*\_HADOOPV in the data type name denotes the variable length integer as defined by Hadoop libraries. Use a \*\_HADOOPV type if user data is encoded in this format by a Hadoop tool outside MapR.
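 
 For example, a sketch of reading a Hadoop variable-length long from a binary column (the column name is hypothetical, and this assumes the data was written with Hadoop's variable-length encoding):
 
     CONVERT_FROM(mycolumn, 'BIGINT_HADOOPV')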
 

http://git-wip-us.apache.org/repos/asf/drill/blob/53008ee1/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/data-types/020-date-time-and-timestamp.md b/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
index 60d997f..e0e9eb1 100644
--- a/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
+++ b/_docs/sql-reference/data-types/020-date-time-and-timestamp.md
@@ -8,18 +8,18 @@ Using familiar date and time formats, listed in the [SQL data types table]({{ si
            TIME '12:23:34', 
            TIMESTAMP '2008-2-23 12:23:34.456', 
            INTERVAL '1' YEAR, INTERVAL '2' DAY, 
-           DATE_ADD(DATE '2008-2-23', INTERVAL '1 10:20:30' DAY TO SECOND) 
+           DATE_ADD(DATE '2008-2-23', INTERVAL '1 10:20:30' DAY TO SECOND), 
            DATE_ADD(DATE '2010-2-23', 1)
-    FROM sys.version LIMIT 1;
+    FROM (VALUES (1));
     +-------------+-----------+--------------------------+---------+---------+------------------------+-------------+
     |   EXPR$0    |  EXPR$1   |          EXPR$2          | EXPR$3  | EXPR$4  |         EXPR$5         |   EXPR$6    |
     +-------------+-----------+--------------------------+---------+---------+------------------------+-------------+
     | 2008-02-23  | 12:23:34  | 2008-02-23 12:23:34.456  | P1Y     | P2D     | 2008-02-24 10:20:30.0  | 2010-02-24  |
     +-------------+-----------+--------------------------+---------+---------+------------------------+-------------+
 
-## INTERVALYEAR and INTERVALDAY
+## INTERVAL
 
-The INTERVALYEAR and INTERVALDAY types represent a period of time. The INTERVALYEAR type specifies values from a year to a month. The INTERVALDAY type specifies values from a day to seconds.
+The INTERVALYEAR and INTERVALDAY internal types represent a period of time. The INTERVALYEAR type specifies values from a year to a month. The INTERVALDAY type specifies values from a day to seconds.
 
 ### Interval in Data Source
 
@@ -67,13 +67,13 @@ To cast interval data to interval types you can query from a data source such as
 
 In the following example, the INTERVAL keyword followed by 200 adds 200 years to the timestamp. The 3 in parentheses in `YEAR(3)` specifies the precision of the year interval, 3 digits in this case to support the hundreds interval.
 
-    SELECT CURRENT_TIMESTAMP + INTERVAL '200' YEAR(3) FROM sys.version;
+    SELECT CURRENT_TIMESTAMP + INTERVAL '200' YEAR(3) FROM (VALUES(1));
     +--------------------------+
     |          EXPR$0          |
     +--------------------------+
-    | 2215-05-20 14:04:25.129  |
+    | 2215-08-14 15:18:00.094  |
     +--------------------------+
-    1 row selected (0.148 seconds)
+    1 row selected (0.096 seconds)
 
 The following examples show the input and output format of INTERVALYEAR (Year, Month) and INTERVALDAY (Day, Hours, Minutes, Seconds, Milliseconds). The following SELECT statements show how to format the query input. The output shows how to format the data in the data source.
 
@@ -85,7 +85,7 @@ The following examples show the input and output format of INTERVALYEAR (Year, M
     +------------+
     1 row selected (0.054 seconds)
 
-    SELECT INTERVAL '1-2' year to month FROM sys.version;
+    SELECT INTERVAL '1-2' year to month FROM (VALUES(1));
     +------------+
     |   EXPR$0   |
     +------------+
@@ -93,7 +93,7 @@ The following examples show the input and output format of INTERVALYEAR (Year, M
     +------------+
     1 row selected (0.927 seconds)
 
-    SELECT INTERVAL '1' year FROM sys.version;
+    SELECT INTERVAL '1' year FROM (VALUES(1));
     +------------+
     |   EXPR$0   |
     +------------+
@@ -101,7 +101,7 @@ The following examples show the input and output format of INTERVALYEAR (Year, M
     +------------+
     1 row selected (0.088 seconds)
 
-    SELECT INTERVAL '13' month FROM sys.version;
+    SELECT INTERVAL '13' month FROM (VALUES(1));
     +------------+
     |   EXPR$0   |
     +------------+
@@ -122,7 +122,7 @@ Next, use the following literals in a SELECT statement.
 * `time`
 * `timestamp`
 
-        SELECT date '2010-2-15' FROM sys.version;
+        SELECT date '2010-2-15' FROM (VALUES(1));
         +------------+
         |   EXPR$0   |
         +------------+
@@ -130,7 +130,7 @@ Next, use the following literals in a SELECT statement.
         +------------+
         1 row selected (0.083 seconds)
 
-        SELECT time '15:20:30' from sys.version;
+        SELECT time '15:20:30' from (VALUES(1));
         +------------+
         |   EXPR$0   |
         +------------+
@@ -138,7 +138,7 @@ Next, use the following literals in a SELECT statement.
         +------------+
         1 row selected (0.067 seconds)
 
-        SELECT timestamp '2015-03-11 6:50:08' FROM sys.version;
+        SELECT timestamp '2015-03-11 6:50:08' FROM (VALUES(1));
         +------------+
         |   EXPR$0   |
         +------------+

