drill-commits mailing list archives

From bridg...@apache.org
Subject [01/19] drill git commit: Updates to docs for the apache Drill 1.9 release
Date Tue, 29 Nov 2016 22:42:39 GMT
Repository: drill
Updated Branches:
  refs/heads/gh-pages cfd513958 -> 426f870a4


Updates to docs for the apache Drill 1.9 release


Project: http://git-wip-us.apache.org/repos/asf/drill/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill/commit/bb638b1d
Tree: http://git-wip-us.apache.org/repos/asf/drill/tree/bb638b1d
Diff: http://git-wip-us.apache.org/repos/asf/drill/diff/bb638b1d

Branch: refs/heads/gh-pages
Commit: bb638b1d6ae03abe408998c3f38efaebe211f578
Parents: cfd5139
Author: Bridget Bevens <bbevens@maprtech.com>
Authored: Fri Nov 18 13:59:14 2016 -0800
Committer: Bridget Bevens <bbevens@maprtech.com>
Committed: Fri Nov 18 13:59:14 2016 -0800

----------------------------------------------------------------------
 _data/version.json                              |  10 +-
 ...010-develop-custom-functions-introduction.md |  44 ++---
 .../025-tutorial-develop-a-simple-function.md   |   8 +-
 .../040-adding-custom-functions-to-drill.md     |  25 ---
 ...ng-custom-functions-to-drill-introduction.md |  10 +
 ...manually-adding-custom-functions-to-drill.md |  23 +++
 .../020-dynamic-udfs.md                         | 136 +++++++++++++
 _docs/getting-started/010-drill-introduction.md |  11 +-
 .../024-aynchronous-parquet-reader.md           |  76 ++++++++
 .../026-hive-metadata-caching.md                |  50 -----
 .../026-parquet-filter-pushdown.md              |  50 +++++
 .../027-hive-metadata-caching.md                |  50 +++++
 _docs/rn/002-1.9.0-rn.md                        | 193 +++++++++++++++++++
 _docs/sql-reference/080-reserved-keywords.md    |   4 +-
 blog/_posts/2016-11-17-drill-1.9-released.md    |  26 +++
 15 files changed, 600 insertions(+), 116 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_data/version.json
----------------------------------------------------------------------
diff --git a/_data/version.json b/_data/version.json
index f476f07..e4e1549 100644
--- a/_data/version.json
+++ b/_data/version.json
@@ -1,7 +1,7 @@
 {
-  "display_version": "1.8",
-  "full_version": "1.8.0",
-  "release_date": "August 30, 2016",
-  "blog_post":"/blog/2016/08/30/drill-1.8-released",
-  "release_notes": "https://drill.apache.org/docs/apache-drill-1-8-0-release-notes/"
+  "display_version": "1.9",
+  "full_version": "1.9.0",
+  "release_date": "November 17, 2016",
+  "blog_post":"/blog/2016/11/17/drill-1.9-released",
+  "release_notes": "https://drill.apache.org/docs/apache-drill-1-9-0-release-notes/"
 }

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/develop-custom-functions/010-develop-custom-functions-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/010-develop-custom-functions-introduction.md b/_docs/develop-custom-functions/010-develop-custom-functions-introduction.md
index c6d9671..199b08c 100644
--- a/_docs/develop-custom-functions/010-develop-custom-functions-introduction.md
+++ b/_docs/develop-custom-functions/010-develop-custom-functions-introduction.md
@@ -1,46 +1,30 @@
 ---
 title: "Develop Custom Functions Introduction"
-date: 2016-01-15
+date: 2016-11-18 21:59:16 UTC
 parent: "Develop Custom Functions"
 ---
-Drill provides a high performance Java API with interfaces that you can
-implement to develop simple custom functions. Custom functions
-are reusable SQL functions that you develop in Java to encapsulate code that
-processes column values during a query. Custom functions have all the performance of the Drill primitive operations. Custom functions can perform
-calculations and transformations that built-in SQL operators and functions do
-not provide. Custom functions are called from within a SQL statement, like a
-regular function, and return a single value.
+Drill provides a high performance Java API with interfaces that you can use to develop simple and aggregate custom functions. Custom functions are reusable SQL functions that you develop in Java to encapsulate code that processes column values during a query. 
 
-This section includes a [tutorial]({{site.baseurl}}/docs/tutorial-develop-a-simple-function/) for creating a simple function that is based on a github project, which you can download. 
+Custom functions are called from within a SQL statement, like a regular function, and return a single value. Custom functions perform as efficiently as Drill's primitive operations. They can perform calculations and transformations that built-in SQL operators and functions do not provide.  
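+
+For example, a query calls a custom function the same way it calls a built-in function. The following sketch assumes a custom function named `mask` has already been developed and added to Drill; the function, column names, and data source are only illustrative:  
+
+       SELECT mask(last_name, '#', 3) AS masked_name
+       FROM cp.`employee.json`
+       LIMIT 5;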
 
 ## Simple Function
 
 A simple function operates on a single row and produces a single row as the
 output. When you include a simple function in a query, the function is called
 once for each row in the result set. Mathematical and string functions are
-examples of simple functions. 
+examples of simple functions.  
 
-## Aggregate Function
-
-The API for developing aggregate custom functions is at the alpha stage and intended for experimental use only. Aggregate functions differ from simple functions in the number of rows that
-they accept as input. An aggregate function operates on multiple input rows
-and produces a single row as output. The COUNT(), MAX(), SUM(), and AVG()
-functions are examples of aggregate functions. You can use an aggregate
-function in a query with a GROUP BY clause to produce a result set with a
-separate aggregate value for each combination of values from the GROUP BY
-clause.
+You can use the provided [tutorial]({{site.baseurl}}/docs/tutorial-develop-a-simple-function/) to create a simple function that is based on a GitHub project, which you can download.
 
-## Process
-
-To develop custom functions that you can use in your Drill queries, you must
-complete the following tasks:
+## Aggregate Function
 
-  1. Create a Java program that implements Drill’s simple or aggregate interface.
-  2. Add the following code to the drill-module.conf in your UDF project (src/main/resources/drill-module.conf). Replace com.yourgroupidentifier.udf with the package name(s) of your UDFs.  
-           drill.classpath.scanning.packages += "com.yourgroupidentifier.udf"
+The API for developing aggregate custom functions is at the alpha stage and intended for experimental use only. Aggregate functions differ from simple functions in the number of rows that they accept as input. An aggregate function operates on multiple input rows
+and produces a single row as output.  
 
-  3. Compile the UDF and place both jar files (source + binary) in the Drill classpath on all the Drillbits.  
-  4. Ensure that DRILL_HOME/conf/drill-override.conf does not contain any information regarding UDF packages.  
-  5. Restart drill on all the Drillbits.  
+The COUNT(), MAX(), SUM(), and AVG() functions are examples of aggregate functions. You can use an aggregate function in a query with a GROUP BY clause to produce a result set with a
+separate aggregate value for each combination of values from the GROUP BY clause.
 
-The following example shows an alternative process that is simpler short-term, but involves maintainence.
+## Development Process
+To develop custom functions for Drill, create a Java program that implements Drill’s [simple]({{site.baseurl}}/docs/developing-a-simple-function/) or [aggregate]({{site.baseurl}}/docs/developing-an-aggregate-function/) interface and then add your custom function(s) to Drill.  
+  
+As of Drill 1.9, there are two methods for adding custom functions to Drill. Administrators can manually add custom functions to Drill, or users can issue the CREATE FUNCTION USING JAR command to register their custom functions. The CREATE FUNCTION USING JAR command is part of the Dynamic UDF feature, which may require assistance from an administrator. 
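+
+For example, once the Dynamic UDF feature is available, a user who has copied the UDF source and binary JAR files to the staging directory could register them with a command of the following form (the JAR name is a placeholder; see the Dynamic UDFs documentation for details):  
+
+       CREATE FUNCTION USING JAR '<jar_name>.jar';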

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/develop-custom-functions/025-tutorial-develop-a-simple-function.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/025-tutorial-develop-a-simple-function.md b/_docs/develop-custom-functions/025-tutorial-develop-a-simple-function.md
index 2ee2576..5493c5c 100644
--- a/_docs/develop-custom-functions/025-tutorial-develop-a-simple-function.md
+++ b/_docs/develop-custom-functions/025-tutorial-develop-a-simple-function.md
@@ -1,6 +1,6 @@
 ---
 title: "Tutorial: Develop a Simple Function"
-date:  
+date: 2016-11-18 21:59:16 UTC
 parent: "Develop Custom Functions"
 ---
 
@@ -218,9 +218,11 @@ Maven generates two JAR files:
 * The default jar with the classes and resources (drill-simple-mask-1.0.jar)  
 * A second jar with the sources (drill-simple-mask-1.0-sources.jar)
 
-Copy the JAR files to the following location:
+Add the JAR files to Drill by copying them to the following location:
 
-`<Drill installation directory>/jars/3rdparty` 
+`<Drill installation directory>/jars/3rdparty`  
+
+**Note:** This tutorial shows the manual method for adding JAR files to Drill; however, as of Drill 1.9, the Dynamic UDF feature provides an alternative method for users.
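+
+For example, instead of copying the JAR files into `jars/3rdparty`, a user could copy `drill-simple-mask-1.0.jar` and `drill-simple-mask-1.0-sources.jar` to the Dynamic UDF staging directory and register the function with a command along these lines (a sketch; see the Dynamic UDFs documentation):  
+
+       CREATE FUNCTION USING JAR 'drill-simple-mask-1.0.jar';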
 
 ## Test the New Function
 

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/develop-custom-functions/040-adding-custom-functions-to-drill.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/040-adding-custom-functions-to-drill.md b/_docs/develop-custom-functions/040-adding-custom-functions-to-drill.md
deleted file mode 100644
index 51f7b49..0000000
--- a/_docs/develop-custom-functions/040-adding-custom-functions-to-drill.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: "Adding Custom Functions to Drill"
-date: 2016-10-04 23:35:34 UTC
-parent: "Develop Custom Functions"
----
-After you develop your custom function and generate the sources and classes
-JAR files, add both JAR files to the Drill classpath, and include the name of
-the package that contains the classes to the main Drill configuration file.
-Restart the Drillbit on each node to refresh the configuration.
-
-To add a custom function to Drill, complete the following steps:
-
-  1. Add the sources JAR file and the classes JAR file for the custom function to the Drill classpath on all nodes running a Drillbit. To add the JAR files, copy them to `<drill installation directory>/jars/3rdparty`.
-  2. Your class jar file should contain a `drill-module.conf` file at its root. 
-  3. The `drill-module.conf` file should contain the packages to scan for functions
-  	`drill.classpath.scanning.packages+= "com.mydomain.drill.fn"`. Separate package names with a comma.
-	
-    **Example**
-		
-		drill.classpath.scanning.package+= "com.mydomain.drill.fn"
-  4. On each Drill node in the cluster, navigate to the Drill installation directory, and issue the following command to restart the Drillbit:
-  
-        <drill installation directory>/bin/drillbit.sh restart
-
-     Now you can issue queries with your custom functions to Drill.

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/develop-custom-functions/adding-custom-functions-to-drill/009-adding-custom-functions-to-drill-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/adding-custom-functions-to-drill/009-adding-custom-functions-to-drill-introduction.md b/_docs/develop-custom-functions/adding-custom-functions-to-drill/009-adding-custom-functions-to-drill-introduction.md
new file mode 100644
index 0000000..2f5b547
--- /dev/null
+++ b/_docs/develop-custom-functions/adding-custom-functions-to-drill/009-adding-custom-functions-to-drill-introduction.md
@@ -0,0 +1,10 @@
+---
+title: "Adding Custom Functions to Drill Introduction"
+date: 2016-10-04 23:35:34 UTC
+parent: "Adding Custom Functions to Drill"
+---
+
+As of Drill 1.9, there are two methods for adding custom functions to Drill. An administrator can manually add functions to Drill, or provide users access to a staging directory where they can upload JAR files and register their UDFs using the CREATE FUNCTION USING JAR command. The CREATE FUNCTION USING JAR command is part of the Dynamic UDF feature.
+
+- For manual instructions, see Manually Adding Custom Functions to Drill. 
+- For Dynamic UDF instructions, see Dynamic UDFs. 

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/develop-custom-functions/adding-custom-functions-to-drill/010-manually-adding-custom-functions-to-drill.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/adding-custom-functions-to-drill/010-manually-adding-custom-functions-to-drill.md b/_docs/develop-custom-functions/adding-custom-functions-to-drill/010-manually-adding-custom-functions-to-drill.md
new file mode 100644
index 0000000..851df63
--- /dev/null
+++ b/_docs/develop-custom-functions/adding-custom-functions-to-drill/010-manually-adding-custom-functions-to-drill.md
@@ -0,0 +1,23 @@
+---
+title: "Manually Adding Custom Functions to Drill"
+date: 2016-10-04 23:35:34 UTC
+parent: "Adding Custom Functions to Drill"
+---
+
+Administrators can manually add custom functions to Drill. After the custom function is developed, generate the sources and classes JAR files. Add both JAR files to the Drill classpath on each node, and include the name of the package that contains the classes to the main Drill configuration file. Restart the drillbit on each node to refresh the configuration.
+
+To add a custom function to Drill, complete the following steps:
+
+1. Add the sources and classes JAR files for the custom function to the Drill classpath on all drillbits by copying the files to `<drill installation directory>/jars/3rdparty`.
+2. Include a `drill-module.conf` file at the root of the classes JAR file. 
+3.	Add the following code to `drill-module.conf` (src/main/resources/drill-module.conf), and replace `com.yourgroupidentifier.udf` with the package name(s) of your UDF(s), as shown below:
+
+             drill.classpath.scanning.packages += "com.yourgroupidentifier.udf"
+**Note:** Separate package names with a comma.
+4.	Verify that `DRILL_HOME/conf/drill-override.conf` does not contain any information regarding UDF packages. 
+5.	Issue the following command to restart Drill:  
+
+              <drill_installation_directory>/bin/drillbit.sh restart
+
+     Now, you can use the custom function(s) in Drill queries.
+

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/develop-custom-functions/adding-custom-functions-to-drill/020-dynamic-udfs.md
----------------------------------------------------------------------
diff --git a/_docs/develop-custom-functions/adding-custom-functions-to-drill/020-dynamic-udfs.md b/_docs/develop-custom-functions/adding-custom-functions-to-drill/020-dynamic-udfs.md
new file mode 100644
index 0000000..e0b3178
--- /dev/null
+++ b/_docs/develop-custom-functions/adding-custom-functions-to-drill/020-dynamic-udfs.md
@@ -0,0 +1,136 @@
+---
+title: "Dynamic UDFs"
+date: 2016-11-11 23:35:34 UTC
+parent: "Adding Custom Functions to Drill"
+---
+
+Drill 1.9 introduces support for Dynamic UDFs. The Dynamic UDF feature enables users to register and unregister UDFs on their own using the CREATE FUNCTION USING JAR and DROP FUNCTION USING JAR commands.  
+
+The Dynamic UDF feature eliminates the need to restart drillbits, which can disrupt users, when administrators manually load and unload UDFs in a multi-tenant environment. Users can issue the CREATE FUNCTION USING JAR command to register manually loaded (built-in) UDFs. Also, users can migrate registered UDFs to built-in UDFs.  
+
+The Dynamic UDF feature is enabled by default. An administrator can enable or disable the feature using the ALTER SYSTEM SET command with the `exec.udf.enable_dynamic_support` option. When the feature is enabled, users must upload their UDF (source and binary) JAR files to a staging directory in the distributed file system before issuing the CREATE FUNCTION USING JAR command to register a UDF.  
+
+If users do not have write access to the staging directory, the registration attempt fails. When a user issues the CREATE FUNCTION USING JAR command to register a UDF, Drill uses specific directories while validating and registering the UDFs. ZooKeeper stores the list of UDFs and associated JAR files. Drillbits refer to this list when registering and unregistering UDFs.  
+
+##UDF Directories 
+ 
+The directories that Drill uses when registering UDFs are configured in the `drill.exec.udf` stanza of the `drill-override.conf` file. Upon startup, Drill verifies that these directories exist in the file system.  If the directories do not exist, Drill creates them. If Drill is unable to create the directories, the start-up attempt fails. An administrator can modify the directory locations in `drill-override.conf`.
+
+The configuration file contains the following default properties and directories required to use the Dynamic UDF feature:  
+
+       drill.exec.udf: {
+         retry-attempts: 5,
+         directory: {
+           base: ${drill.exec.zk.root}"/udf",
+           local: ${drill.exec.udf.directory.base}"/local",
+           staging: ${drill.exec.udf.directory.base}"/staging",
+           registry: ${drill.exec.udf.directory.base}"/registry",
+           tmp: ${drill.exec.udf.directory.base}"/tmp"
+         }
+       }  
+
+The following table describes the configuration properties and UDF directories, where `drill.exec.udf.directory.base` is the relative directory used to generate all of the UDF directories (local and remote):  
+
+|       Property                                          | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
+|---------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| retry-attempts                                          | The   number of times that the UDF registry update can fail before Drill returns an   error. Drill checks the registry version before updating the remote function   registry to avoid overriding changes made by another user. If the registry   version has changed, Drill validates the functions among the updated registry   again. The default is 5.                                                                                                                                                                                                                                              |
+| base: ${drill.exec.zk.root}"/udf"                       | The   property used to separate the UDF directories between clusters that use the   same file system.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| local:   ${drill.exec.udf.directory.base}"/local"       | The relative path concatenated   to the Drill temporary directory to indicate the local UDF directory. The   local UDF directory is used as a temporary directory for the Dynamic UDF JAR   files. Drill cleans this directory out upon exiting.                                                                                                                                                                                                                                                                                                                                                        |
+| staging:   ${drill.exec.udf.directory.base}"/staging"   | The   location to which users copy their binary and source JAR files. This   directory must be accessible to users in order to register their UDFs. When a   UDF is registered, Drill deletes both of the JAR files (source and binary)   from this directory. If Drill fails to register the UDFs from the JAR files,   the JAR files remain here. You can change the location of this   directory.                                                                                                                                                                                                    |
+| registry:   ${drill.exec.udf.directory.base}"/registry" | The   location to which Drill copies the source and binary JAR files after   validating the UDFs. Drill copies the JAR files from the registry directory   to a local UDF directory on each drillbit. When you unregister UDFs, Drill   deletes the appropriate JAR files from the local UDF directory on each   drillbit. DO NOT delete the JAR files from the registry directory. Deleting   JAR files from the registry directory results in inconsistencies in the   Dynamic UDF registry that point to the directory where JAR files are stored.   You can change the location of this directory.  |
+| tmp:   ${drill.exec.udf.directory.base}"/tmp"           | The   location to which Drill backs up the binary and source JAR files before   starting the registration process. Drill places each binary and source file   in a unique folder in this directory. At the end of registration, Drill   deletes both JAR files from this directory. You can change the location of   this directory.                                                                                                                                                                                                                                                                    |  
+
+The following table lists optional directories that you can add:  
+
+|       Property                | Description                                                                                                                                                                                                                                                                                                                                 |
+|-------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| drill.exec.udf.directory.fs   | Changes the file system from the   default. If there are multiple drillbits in the cluster, and the default file   system is not distributed, you must include this property, and set it to a   distributed file system. For example, file:///, hdfs:///, or maprfs:///, as   shown below:          drill.exec.udf.directory.fs: "hdfs:///" |
+| drill.exec.udf.directory.root | Changes   the root directory for remote UDF directories. By default, this property is   set to the home directory of the user that started the drillbit. For example,   on Linux the location is /home/some_user. On DFS, the location is   /user/<user_name>. And, on Windows, the location is   /C:/User/<user_name>.                     |  
+
+##Security and Authentication Impact
+Currently, any user can register UDFs if they
+have access to the staging directory. Since Drill does not provide full
+authorization and authentication support, an administrator may want to disable
+the Dynamic UDF feature. See Enabling and Disabling the Dynamic UDF Feature.
+ 
+Drill moves all JAR files from the staging directory to the other UDF directories as the user that started the drillbit, not as the user that submitted the JAR files. Drill behaves this way even if impersonation is enabled.  
+
+##Before Using the Dynamic UDF Feature 
+Before users can issue the CREATE FUNCTION USING JAR or DROP FUNCTION USING JAR commands to register or unregister UDFs, an administrator should verify that the option is enabled and that the staging directory is accessible to users.  
+
+Users create a UDF using Drill’s simple or aggregate function interface. Add a `drill-module.conf` file to the root of the class JAR file. The `drill-module.conf` file must list the packages to scan for functions; separate multiple package names with commas. For example:  
+
+       drill.classpath.scanning.packages += "com.mydomain.drill.fn"
+ 
+Once the UDF is created, copy the source and binary JAR files to the staging directory. Now, you can register your UDF using the CREATE FUNCTION USING JAR command. See Registering a UDF.  
+
+##Enabling and Disabling the Dynamic UDF Feature
+An administrator can enable or disable the Dynamic UDF feature. The feature is enabled by default.  The `exec.udf.enable_dynamic_support` option turns the Dynamic UDF feature on and off. If security is a concern, the administrator can disable the feature to prevent users from registering and unregistering UDFs.
+
+
+Use the [ALTER SYSTEM SET]({{site.baseurl}}/docs/alter-system/) command with the  `exec.udf.enable_dynamic_support` system option to turn the feature on or off.  
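+
+For example, an administrator could disable the feature with the following command (a sketch; setting the option back to `true` re-enables it):  
+
+       ALTER SYSTEM SET `exec.udf.enable_dynamic_support` = false;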
+
+##Registering a UDF
+Copy the UDF source and binary JAR files to the DFS staging directory and then issue the CREATE FUNCTION USING JAR command to register the UDF, as follows:   
+
+       CREATE FUNCTION USING JAR '<jar_name>.jar'  
+
+If you do not know the location of the staging directory or you need access to the directory, contact your administrator.
+
+When you issue the command, Drill uses the JAR file name to register the JAR name in the Dynamic UDF registry (UDF list stored in ZooKeeper) and then copies the source and binary JAR files to the local UDF directory on each drillbit.  
+
+Upon successful registration, Drill returns a message with a list of registered UDFs:  
+
+       +-------+---------------------------------------------------------+
+       |  ok   |                         summary                         |
+       +-------+---------------------------------------------------------+
+       | true  | The following UDFs in jar %s have been registered: %s   |
+       +-------+---------------------------------------------------------+  
+
+##Unregistering a UDF
+Issue the DROP FUNCTION USING JAR command to unregister a UDF, as follows:  
+
+       DROP FUNCTION USING JAR '<jar_name>.jar'  
+
+When you issue the command, Drill unregisters UDFs based on the JAR file name and removes the JAR files from the UDF directory. Drill deletes all UDFs associated with the JAR file from the UDF registry (UDF list stored in ZooKeeper), signaling drillbits to start the local unregistering process.  
+
+Drill returns a message with the list of unregistered UDFs:  
+
+       +-------+---------------------------------------------------------+
+       |  ok   |                         summary                         |
+       +-------+---------------------------------------------------------+
+       | true  | The following UDFs in jar %s have been unregistered: %s |
+       +-------+---------------------------------------------------------+  
+
+##Migrating UDFs from Dynamic to Built-In  
+ 
+You can migrate UDFs registered using the Dynamic UDF feature to built-in UDFs to free up space in the UDF directories and the Dynamic UDF registry (UDF list stored in ZooKeeper). You can migrate all of the UDFs or you can migrate a portion of the UDFs. If you migrate all of the UDFs, you cannot issue the DROP FUNCTION USING JAR command to unregister the UDFs that have been migrated from dynamic to built-in.  
+
+###Migrating All Registered UDFs to Built-In UDFs
+To migrate all registered UDFs to built-in UDFs, complete the following steps:  
+
+1. Stop all drillbits in the cluster.  
+2. Move the UDF source and binary JAR files to the $DRILL_SITE/jars directory on each drillbit. (Must be included in the classpath.)
+3. Remove the remote function registry from ZooKeeper.
+4. Start all drillbits in the cluster.
+
+###Migrating Some of the Registered UDF JAR Files to Built-In UDFs
+To migrate a portion of the UDF JAR files to built-in UDFs, complete the following steps:
+
+1. Copy (not move) the JAR files from the UDF registry directory to the $DRILL_SITE/jars directory on each drillbit. (Must be included in the classpath.)
+2. Issue the DROP FUNCTION USING JAR command for each JAR file.
+3. Stop all drillbits in the cluster.
+4. Start all drillbits in the cluster.  
+
+##Limitations
+The Dynamic UDF feature has the following known limitations:  
+
+* If a user drops a UDF while a query that references the UDF is running, the query may fail. Users should verify that no queries reference a UDF prior to issuing the DROP command.
+* The DROP command operates only at the JAR level. A user cannot unregister a single UDF from a JAR that contains several UDFs. To avoid this situation, a user can package one UDF per JAR.
+* All UDF directories (remote or local) are created upon drillbit startup, even if Dynamic UDF support is disabled. Drillbit startup fails if the user who started the drillbit does not have write access to these directories.
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/getting-started/010-drill-introduction.md
----------------------------------------------------------------------
diff --git a/_docs/getting-started/010-drill-introduction.md b/_docs/getting-started/010-drill-introduction.md
index fe838ad..c29ef0c 100644
--- a/_docs/getting-started/010-drill-introduction.md
+++ b/_docs/getting-started/010-drill-introduction.md
@@ -1,6 +1,6 @@
 ---
 title: "Drill Introduction"
-date: 2016-08-30 23:12:06 UTC
+date: 2016-11-18 21:59:17 UTC
 parent: "Getting Started"
 ---
 Drill is an Apache open-source SQL query engine for Big Data exploration.
@@ -10,6 +10,15 @@ applications, while still providing the familiarity and ecosystem of ANSI SQL,
 the industry-standard query language. Drill provides plug-and-play integration
 with existing Apache Hive and Apache HBase deployments.  
 
+## What's New in Apache Drill 1.9  
+
+Drill 1.9 provides the following new features:  
+
+* Asynchronous Parquet reader
+* Parquet filter pushdown
+* Dynamic UDF support
+* HTTPD format plugin   
+
 ## What's New in Apache Drill 1.8  
 
 Drill 1.8 provides the following new features and changes: 

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/performance-tuning/024-aynchronous-parquet-reader.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/024-aynchronous-parquet-reader.md b/_docs/performance-tuning/024-aynchronous-parquet-reader.md
new file mode 100644
index 0000000..4905921
--- /dev/null
+++ b/_docs/performance-tuning/024-aynchronous-parquet-reader.md
@@ -0,0 +1,76 @@
+---
+title: "Asynchronous Parquet Reader"
+date: 2016-02-08 21:57:13 UTC
+parent: "Performance Tuning"
+---
+
+Drill 1.9 introduces an asynchronous Parquet reader option that you can enable to improve the performance of the Parquet Scan operator. The Parquet Scan operator reads Parquet data. Reading Parquet data involves scanning the disk, decompressing and decoding the data, and writing data to internal memory structures (value vectors).  
+
+When the asynchronous Parquet reader option is enabled, the speed at which the Parquet reader scans, decompresses, and decodes the data increases. The scan operation uses a buffering read strategy that allows the file system to perform larger, sequential reads, which significantly improves query performance.  
+
+Typically, the Drill default settings provide the best performance for a wide variety of use cases. However, specific cases that require a high level of performance can benefit from tuning the Parquet Scan operator.  
+
+##Tuning the Parquet Scan Operator  
+The `store.parquet.reader.pagereader.async` option turns the asynchronous Parquet reader on or off. The option is turned on by default (see the default values in the options table below). You can use the [ALTER SESSION command]({{site.baseurl}}/docs/alter-session-command/) to enable or disable the asynchronous Parquet reader option, as well as the options that control buffering and parallel decoding.  
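+
+For example, the following ALTER SESSION commands explicitly turn on the asynchronous page reader and parallel column decoding for the current session (a sketch using the option names described in the tables below):  
+
+       ALTER SESSION SET `store.parquet.reader.pagereader.async` = true;
+       ALTER SESSION SET `store.parquet.reader.columnreader.async` = true;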
+
+When the asynchronous page reader option is enabled, the Parquet Scan operator no longer reports operator wait time. Instead, it reports additional operator metrics that you can view in the query profile in the Drill Web Console.  
+
+The `drill.exec.scan.threadpool_size` and `drill.exec.scan.decode_threadpool_size` parameters in the `drill-override.conf` file control the size of the threadpools that read and decode Parquet data when the asynchronous Parquet reader is enabled.  
+
+For more information, see the [functional specification](https://github.com/parthchandra/drill/wiki/Parquet-file-reading-performance-improvement).  
+
+The following sections provide the configuration options and details:  
+
+###Asynchronous Parquet Reader Options  
+
+The following table lists and describes the asynchronous Parquet reader options that you can enable or disable using the ALTER SESSION SET command:  
+
+|       Option                                 | Description                                                                                                                                                                                                                                                                                                                                                          | Type    | Default     |
+|----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|-------------|
+| store.parquet.reader.pagereader.async        | Enable the asynchronous page reader. This   pipelines the reading of data from disk for high performance.                                                                                                                                                                                                                                                            | BOOLEAN | TRUE        |
+| store.parquet.reader.pagereader.bufferedread | Enable buffered page reading. Can improve disk   scan speeds by buffering data, but increases memory usage. This option is   less useful when the number of columns increases.                                                                                                                                                                                       | BOOLEAN | TRUE        |
+| store.parquet.reader.pagereader.buffersize   | The size of the buffer (in bytes) to use if   bufferedread is true. Has no effect otherwise.                                                                                                                                                                                                                                                                         | LONG    | 4194304     |
+| store.parquet.reader.pagereader.usefadvise   | If the file system supports it, the Parquet file   reader issues an fadvise call to enable file server side sequential reading   and caching. Since many HDFS implementations do not support this and because   this may have no effect in conditions of high concurrency, the option is set   to false. Useful for benchmarks and for performance critical queries. | BOOLEAN | FALSE       |
+| store.parquet.reader.columnreader.async      | Turn on parallel decoding of column data from   Parquet to the in memory format. This increases CPU usage and is most useful   for compressed fixed width data. With increasing concurrency, this option may   cause queries to run slower and should be turned on only for performance   critical queries.                                                          | BOOLEAN | FALSE       |  
+
+###Drillbit Configuration Parameters
+The following table lists and describes the drillbit configuration parameters in `drill-override.conf` that control the size of the threadpools used by the asynchronous Parquet reader:  
+
+**Note:** You must restart the drillbit for these configuration changes to take effect.  
+
+|       Configuration Option             | Description                                                                                                                                                                                                                                                                                                                                                                                                                       | Default               |
+|----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------|
+| drill.exec.scan.threadpool_size        | The size of the thread pool used for reading   data from disk. Currently used only by the Parquet reader. This number should   ideally be a small multiple of the number of disks on the node. The   pipelining of the scan operator is very sensitive to the scan thread pool   size. For the best performance, set the number to 1-2 times the number of   disks on the node that are available to the distributed file system. | 8                     |
+| drill.exec.scan.decode_threadpool_size | The size of the thread pool used for decoding   Parquet data.                                                                                                                                                                                                                                                                                                                                                                     | (number of cores+1)/2 |  
+
+###Operator Metrics
+When the asynchronous Parquet reader option is enabled, Drill provides the following additional operator metrics, which you can access in the query profile from the Drill Web Console:  
+
+**Note:** Time is measured in nanoseconds.   
+
+|       Metric                  | Description                                                                                                                                                                                                                                                           |
+|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| NUM_DICT_PAGE_LOADS           | Number of dictionary pages read.                                                                                                                                                                                                                                      |
+| NUM_DATA_PAGE_LOADS           | Number of data pages read.                                                                                                                                                                                                                                            |
+| NUM_DATA_PAGES_DECODED        | Number of data pages decoded.                                                                                                                                                                                                                                         |
+| NUM_DICT_PAGES_DECOMPRESSED   | Number of dictionary pages decompressed.                                                                                                                                                                                                                              |
+| NUM_DATA_PAGES_DECOMPRESSED   | Number of data pages decompressed.                                                                                                                                                                                                                                    |
+| TOTAL_DICT_PAGE_READ_BYTES    | Total bytes read from disk for dictionary pages.                                                                                                                                                                                                                      |
+| TOTAL_DATA_PAGE_READ_BYTES    | Total bytes read from disk for data pages.                                                                                                                                                                                                                            |
+| TOTAL_DICT_DECOMPRESSED_BYTES | Total bytes decompressed for dictionary pages.   Same as compressed bytes on disk.                                                                                                                                                                                    |
+| TOTAL_DATA_DECOMPRESSED_BYTES | Total bytes decompressed for data pages. Same as   compressed bytes on disk.                                                                                                                                                                                          |
+| TIME_DICT_PAGE_LOADS          | Time spent reading dictionary pages   from disk.                                                                                                                                                                                                                      |
+| TIME_DATA_PAGE_LOADS          | Time spent reading data pages from   disk.                                                                                                                                                                                                                            |
+| TIME_DATA_PAGE_DECODE         | Time spent decoding data pages.                                                                                                                                                                                                                                       |
+| TIME_DICT_PAGE_DECODE         | Time spent decoding dictionary pages.                                                                                                                                                                                                                                 |
+| TIME_DICT_PAGES_DECOMPRESSED  | Time spent decompressing dictionary   pages.                                                                                                                                                                                                                          |
+| TIME_DATA_PAGES_DECOMPRESSED  | Time spent decompressing data pages.                                                                                                                                                                                                                                  |
+| TIME_DISK_SCAN_WAIT           | The total time spent by the   Parquet Scan operator waiting for the data to be read from disk (completion   of an asynchronous disk read to complete). In general, if TIME_DISK_SCAN_WAIT   is high, then the query is disk bound and may benefit from faster drives. |
+| TIME_DISK_SCAN                | The time that the Parquet Scan   operator spent reading data from the disk (or more accurately, from the   filesystem). TIME_DISK_SCAN is the equivalent metric to the operator wait   time reported by the synchronous version of the Parquet reader.                |  
+
+##Limitation
+The asynchronous Parquet reader option can increase the amount of memory required to read a single column of Parquet data to as much as 8 MB. When a column contains less than 8 MB of data, the reader uses less memory. Therefore, if a Parquet file has many columns (hundreds of columns), each column should contain less than 8 MB of data.  
+
+
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/performance-tuning/026-hive-metadata-caching.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/026-hive-metadata-caching.md b/_docs/performance-tuning/026-hive-metadata-caching.md
deleted file mode 100644
index fd9d6c2..0000000
--- a/_docs/performance-tuning/026-hive-metadata-caching.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: "Hive Metadata Caching"
-date: 2016-02-02 23:56:57 UTC
-parent: "Performance Tuning"
----
-
-Drill caches Hive metadata in a Hive metastore client cache that resides in Drill instead of accessing the Hive metastore directly. During a query, Drill can access metadata faster from the cache than from the Hive metastore. By default, the Hive metastore client cache has a TTL (time to live) of 60 seconds. The TTL is how long cache entries exist before the cache reloads metadata from the Hive metastore. Drill expires an entry in the cache 60 seconds after the following events:  
-
-*  creation of the entry
-*  a read or write operation on the entry
-*  the most recent replacement of the entry value  
-
-You can modify the TTL depending on how frequently the Hive metadata is updated. If the Hive metadata is updated frequently, decrease the cache TTL value. If Hive metadata is updated infrequently, increase the cache TTL value.
-
-For example, when you run a Drill query on a Hive table, Drill refreshes the cache 60 seconds after the read on the table. If the table is updated in Hive within that 60 second window and you issue another query on the table, Drill may not be aware of the changes until the cache expires. In such a scenario where Hive metadata is changing so quickly, you could reduce the cache TTL to 2 seconds so that Drill refreshes the cache more frequently.  
-
-## Configuring the Cache  
-
-As of Drill 1.5, you can modify the Hive storage plugin to change the rate at which the cache is reloaded. You can also modify whether the cache reloads after reads and writes or just writes.  
-
-{% include startnote.html %}The configuration applies specifically to the storage plugin that you modify. If you have multiple Hive storage plugins configured in Drill, the configuration does not apply globally. You can configure a different caching policy for each Hive metastore server.{% include endnote.html %}  
-
-To configure the Hive metastore client cache in Drill, complete the following steps:  
-
-1. Start the [Drill Web Console]({{site.baseurl}}/docs/starting-the-web-console/).
-2. Select the **Storage** tab.
-3. Click **Update** next to the “hive” storage plugin.
-4. Add the following parameters:  
-
-              "hive.metastore.cache-ttl-seconds": "<value>",
-              "hive.metastore.cache-expire-after": "<value>"  
-The `cache-ttl-seconds` value can be any non-negative value, including 0, which turns caching off. The `cache-expire-after` value can be “`access`” or “`write`”. Access indicates expiry after a read or write operation, and write indicates expiry after a write operation only.
-5. **Enable** the storage plugin to save the changes.  
-
-Example:  
-
-       {
-             "type": "hive",
-             "enabled": true,
-             "configProps": {
-               "hive.metastore.uris": "",
-               "javax.jdo.option.ConnectionURL": "jdbc:derby:;databaseName=../sample-data/drill_hive_db;create=true",
-               "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
-               "fs.default.name": "file:///",
-               "hive.metastore.sasl.enabled": "false"
-        	   "hive.metastore.cache-ttl-seconds": "2",
-               "hive.metastore.cache-expire-after": "access"
-        
-         	  }
-       	}

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/performance-tuning/026-parquet-filter-pushdown.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/026-parquet-filter-pushdown.md b/_docs/performance-tuning/026-parquet-filter-pushdown.md
new file mode 100644
index 0000000..3155ba6
--- /dev/null
+++ b/_docs/performance-tuning/026-parquet-filter-pushdown.md
@@ -0,0 +1,50 @@
+---
+title: "Parquet Filter Pushdown"
+date: 2016-02-02 23:56:57 UTC
+parent: "Performance Tuning"
+---
+
+Drill 1.9 introduces the Parquet filter pushdown option. Parquet filter pushdown is a performance optimization that prunes extraneous data from a Parquet file to reduce the amount of data that Drill scans and reads when a query on a Parquet file contains a filter expression. Pruning data reduces the I/O, CPU, and network overhead to optimize Drill’s performance.
+ 
+Parquet filter pushdown is enabled by default. When a query contains a filter expression, you can run the EXPLAIN PLAN command to see if Drill applies Parquet filter pushdown to the query. You can enable and disable this feature using the [ALTER SYSTEM|SESSION SET]({{site.baseurl}}/docs/alter-system/) command with the `planner.store.parquet.rowgroup.filter.pushdown` option.  
+
+##How Parquet Filter Pushdown Works
+Drill applies Parquet filter pushdown during the query planning phase. The query planner in Drill performs Parquet filter pushdown by evaluating the filter expressions in the query. If no filter expression exists, the underlying scan operator reads all of the data in a Parquet file and then sends the data to operators downstream. When filter expressions exist, the planner applies each filter and prunes the data, reducing the amount of data that the scanner and Parquet reader must read.
+ 
+Parquet filter pushdown is similar to partition pruning in that it reduces the amount of data that Drill must read during runtime. Parquet filter pushdown relies on the minimum and maximum value statistics in the row group metadata of the Parquet file to filter and prune data at the row group level. Drill can use any column in a filter expression as long as the column in the Parquet file contains statistics. In contrast, partition pruning requires data to be partitioned on a column. A partition is created for each unique value in the column, and partition pruning can only prune data when the filter uses the partitioned column.  
+ 
+The query planner looks at the minimum and maximum values in each row group for an intersection. If no intersection exists, the planner can prune the row group from the scan. If the minimum and maximum value range is too large, Drill does not apply Parquet filter pushdown. The query planner can typically prune more data when the data in the Parquet file is sorted, because sorting keeps the value range within each row group narrow.  
+
+##Using Parquet Filter Pushdown
+Currently, Parquet filter pushdown only supports filters that reference columns from a single table (local filters). Parquet filter pushdown requires the minimum and maximum values in the Parquet file metadata. All Parquet files created in Drill using the CTAS statement contain the necessary metadata. If your Parquet files were created using another tool, you may need to use Drill to read and rewrite the files using the [CTAS command]({{site.baseurl}}/docs/create-table-as-ctas-command/).
+ 
+Parquet filter pushdown works best if you presort the data. You do not have to sort the entire data set at once. You can sort a subset of the data set, sort another subset, and so on. 
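+
+For example, you could use a CTAS statement to rewrite an existing Parquet data set sorted on a commonly filtered column, so that each row group covers a narrow range of values (the workspace, table name, file path, and column below are illustrative):  
+
+       CREATE TABLE dfs.tmp.`orders_sorted` AS
+       SELECT * FROM dfs.`/data/orders_parquet` ORDER BY order_date;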
+
+###Configuring Parquet Filter Pushdown  
+Use the [ALTER SYSTEM|SESSION SET]({{site.baseurl}}/docs/alter-system/) command with the Parquet filter pushdown options to enable or disable the feature, and set the number of row groups for a table.  
+
+The following table lists the Parquet filter pushdown options with their descriptions and default values:  
+
+|       Option                                               | Description                                                                                                                                                                                                                                                                                                                                                | Default   |
+|------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
+| "planner.store.parquet.rowgroup.filter.pushdown"           | Turns the Parquet filter pushdown feature on or   off.                                                                                                                                                                                                                                                                                                     | TRUE      |
+| "planner.store.parquet.rowgroup.filter.pushdown.threshold" | Sets the number of row groups that a table can   have. You can increase the threshold if the filter can prune many row groups.   However, if this setting is too high, the filter evaluation overhead   increases. Base this setting on the data set. Reduce this setting if the   planning time is significant, or you do not see any benefit at runtime. | 10,000    |  
+
+###Viewing the Query Plan
+Because Drill applies Parquet filter pushdown during the query planning phase, you can view the query execution plan to see if Drill pushes down the filter when a query on a Parquet file contains a filter expression.
+ 
+Run the [EXPLAIN PLAN command]({{site.baseurl}}/docs/explain-commands/) to see the execution plan for the query. See [Query Plans]({{site.baseurl}}/docs/query-plans/) for more information. 
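+
+For example, the following statement shows the plan for a query with a filter expression, so you can check whether the filter appears in the Parquet scan portion of the plan (the file path and column are illustrative):  
+
+       EXPLAIN PLAN FOR
+       SELECT * FROM dfs.`/data/orders_parquet` WHERE order_date > DATE '2016-01-01';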
+
+##Support 
+The following table lists the supported and unsupported clauses, operators, data types, and scenarios for Parquet filter pushdown:  
+
+|                      | Supported | Not Supported |
+|----------------------|-----------|---------------|
+| Clauses              | WHERE, HAVING (HAVING is supported if Drill can pass the filter through GROUP BY.) |  |
+| Operators            | AND, OR, IN (An IN list is converted to OR if the number of items in the IN list is within a certain threshold, for example 20. Above the threshold, pruning cannot occur.) | NOT, ITEM (Drill does not push the filter past the ITEM operator, which is used for complex fields.) |
+| Comparison Operators | <>, <, >, <=, >=, = (Filters are of the form "column = value".) | IS [NOT] NULL |
+| Data Types           | INT, BIGINT, FLOAT, DOUBLE, DATE, TIMESTAMP, TIME | CHAR and VARCHAR columns, Hive TIMESTAMP |
+| Function             | CAST is supported among these four numeric types only: INT, BIGINT, FLOAT, DOUBLE |  |
+| Other                | -- | Joins, files with multiple row groups, enabled native Hive reader |
+
+

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/performance-tuning/027-hive-metadata-caching.md
----------------------------------------------------------------------
diff --git a/_docs/performance-tuning/027-hive-metadata-caching.md b/_docs/performance-tuning/027-hive-metadata-caching.md
new file mode 100644
index 0000000..fd9d6c2
--- /dev/null
+++ b/_docs/performance-tuning/027-hive-metadata-caching.md
@@ -0,0 +1,50 @@
+---
+title: "Hive Metadata Caching"
+date: 2016-02-02 23:56:57 UTC
+parent: "Performance Tuning"
+---
+
+Drill caches Hive metadata in a Hive metastore client cache that resides in Drill instead of accessing the Hive metastore directly. During a query, Drill can access metadata faster from the cache than from the Hive metastore. By default, the Hive metastore client cache has a TTL (time to live) of 60 seconds. The TTL is how long cache entries exist before the cache reloads metadata from the Hive metastore. Drill expires an entry in the cache 60 seconds after the following events:  
+
+*  creation of the entry
+*  a read or write operation on the entry
+*  the most recent replacement of the entry value  
+
+You can modify the TTL depending on how frequently the Hive metadata is updated. If the Hive metadata is updated frequently, decrease the cache TTL value. If Hive metadata is updated infrequently, increase the cache TTL value.
+
+For example, when you run a Drill query on a Hive table, Drill refreshes the cache 60 seconds after the read on the table. If the table is updated in Hive within that 60 second window and you issue another query on the table, Drill may not be aware of the changes until the cache expires. In such a scenario where Hive metadata is changing so quickly, you could reduce the cache TTL to 2 seconds so that Drill refreshes the cache more frequently.  
+
+## Configuring the Cache  
+
+As of Drill 1.5, you can modify the Hive storage plugin to change the rate at which the cache is reloaded. You can also modify whether the cache reloads after reads and writes or just writes.  
+
+{% include startnote.html %}The configuration applies specifically to the storage plugin that you modify. If you have multiple Hive storage plugins configured in Drill, the configuration does not apply globally. You can configure a different caching policy for each Hive metastore server.{% include endnote.html %}  
+
+To configure the Hive metastore client cache in Drill, complete the following steps:  
+
+1. Start the [Drill Web Console]({{site.baseurl}}/docs/starting-the-web-console/).
+2. Select the **Storage** tab.
+3. Click **Update** next to the “hive” storage plugin.
+4. Add the following parameters:  
+
+              "hive.metastore.cache-ttl-seconds": "<value>",
+              "hive.metastore.cache-expire-after": "<value>"  
+The `cache-ttl-seconds` value can be any non-negative value, including 0, which turns caching off. The `cache-expire-after` value can be “`access`” or “`write`”. Access indicates expiry after a read or write operation, and write indicates expiry after a write operation only.
+5. **Enable** the storage plugin to save the changes.  
+
+Example:  
+
+       {
+         "type": "hive",
+         "enabled": true,
+         "configProps": {
+           "hive.metastore.uris": "",
+           "javax.jdo.option.ConnectionURL": "jdbc:derby:;databaseName=../sample-data/drill_hive_db;create=true",
+           "hive.metastore.warehouse.dir": "/tmp/drill_hive_wh",
+           "fs.default.name": "file:///",
+           "hive.metastore.sasl.enabled": "false",
+           "hive.metastore.cache-ttl-seconds": "2",
+           "hive.metastore.cache-expire-after": "access"
+         }
+       }

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/rn/002-1.9.0-rn.md
----------------------------------------------------------------------
diff --git a/_docs/rn/002-1.9.0-rn.md b/_docs/rn/002-1.9.0-rn.md
new file mode 100644
index 0000000..2519c97
--- /dev/null
+++ b/_docs/rn/002-1.9.0-rn.md
@@ -0,0 +1,193 @@
+---
+title: "Apache Drill 1.9.0 Release Notes"
+parent: "Release Notes"
+---
+
+**Release date:**  November 17, 2016
+
+Today, we're happy to announce the availability of Drill 1.9.0. You can download it [here](https://drill.apache.org/download/).
+
+## New Features
+This release of Drill provides the following new features: 
+
+- Asynchronous Parquet reader
+- Parquet filter pushdown  
+- Dynamic UDF support  
+- HTTPD format plugin  
+
+The following sections list additional bug fixes and improvements:  
+
+<h2>        Sub-task
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4420'>DRILL-4420</a>] -         C client and ODBC driver should move to using the new metadata methods provided by DRILL-4385
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4452'>DRILL-4452</a>] -         Update avatica version for Drill jdbc
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4560'>DRILL-4560</a>] -         ZKClusterCoordinator does not call DrillbitStatusListener.drillbitRegistered for new bits
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4730'>DRILL-4730</a>] -         Update JDBC DatabaseMetaData implementation to use new Metadata APIs
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4835'>DRILL-4835</a>] -         Add cancel for create prepare statement (and possibly other metadata APIs)
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4968'>DRILL-4968</a>] -         Add column size information to ColumnMetadata
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4969'>DRILL-4969</a>] -         Basic implementation for displaySize
+</li>
+</ul>
+                            
+<h2>        Bug
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-1996'>DRILL-1996</a>] -         C++ Client: Make Cancel API Public
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-3898'>DRILL-3898</a>] -         No space error during external sort does not cancel the query
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4203'>DRILL-4203</a>] -         Parquet File : Date is stored wrongly
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4369'>DRILL-4369</a>] -         Database driver fails to report any major or minor version information
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4370'>DRILL-4370</a>] -         DatabaseMetadata returning &lt;Properties resource apache-drill-jdbc.properties not loaded&gt;
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4525'>DRILL-4525</a>] -         Query with BETWEEN clause on Date and Timestamp values fails with Validation Error
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4542'>DRILL-4542</a>] -         if external sort fails to spill to disk, memory is leaked and wrong error message is displayed
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4618'>DRILL-4618</a>] -         random numbers generator function broken
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4763'>DRILL-4763</a>] -         Parquet file with DATE logical type produces wrong results for simple SELECT
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4767'>DRILL-4767</a>] -         Parquet reader throw IllegalArgumentException for int32 type with GZIP compression
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4769'>DRILL-4769</a>] -         forman spins query int32 data with snappy compression
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4770'>DRILL-4770</a>] -         ParquetRecordReader throws NPE querying a single int64 column file
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4823'>DRILL-4823</a>] -         Fix OOM while trying to prune partitions with reasonable data size
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4824'>DRILL-4824</a>] -         JSON with complex nested data produces incorrect output with missing fields
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4826'>DRILL-4826</a>] -         Query against INFORMATION_SCHEMA.TABLES degrades as the number of views increases
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4862'>DRILL-4862</a>] -         wrong results - use of convert_from(binary_string(key),&#39;UTF8&#39;) in filter results in wrong results
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4870'>DRILL-4870</a>] -         drill-config.sh sets JAVA_HOME incorrectly for the Mac
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4874'>DRILL-4874</a>] -         &quot;No UserGroupInformation while generating ORC splits&quot; - hive known issue in 1.2.0-mapr-1607 release.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4877'>DRILL-4877</a>] -         max(dir0), max(dir1) query against parquet data slower by 2X
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4880'>DRILL-4880</a>] -         Support JDBC driver registration using ServiceLoader 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4884'>DRILL-4884</a>] -         Fix IOB exception in limit n query when n is beyond 65535.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4888'>DRILL-4888</a>] -         putIfAbsent for ZK stores is not atomic
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4894'>DRILL-4894</a>] -         Fix unit test failure in &#39;storage-hive/core&#39; module
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4905'>DRILL-4905</a>] -         Push down the LIMIT to the parquet reader scan to limit the numbers of records read
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4906'>DRILL-4906</a>] -         CASE Expression with constant generates class exception
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4911'>DRILL-4911</a>] -         SimpleParallelizer should avoid plan serialization for logging purpose when debug logging is not enabled.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4921'>DRILL-4921</a>] -         Scripts drill_config.sh,  drillbit.sh, and drill-embedded fail when accessed via a symbolic link
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4925'>DRILL-4925</a>] -         Add types filter to getTables metadata API
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4930'>DRILL-4930</a>] -         Metadata results are not sorted
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4934'>DRILL-4934</a>] -         ServiceEngine does not use property useIP for DrillbitStartup
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4941'>DRILL-4941</a>] -         UnsupportedOperationException : CASE WHEN true or null then 1 else 0 end
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4945'>DRILL-4945</a>] -         Missing subtype information in metadata returned by prepared statement
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4950'>DRILL-4950</a>] -         Consume Spurious Empty Batches in JDBC
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4954'>DRILL-4954</a>] -         allTextMode in the MapRDB plugin always return nulls
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4964'>DRILL-4964</a>] -         Drill fails to connect to hive metastore after hive metastore is restarted unless drillbits are restarted also
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4972'>DRILL-4972</a>] -         Drillbit shuts down immediately after starting if embedded web server is disabled
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4974'>DRILL-4974</a>] -         NPE in FindPartitionConditions.analyzeCall() for &#39;holistic&#39; expressions
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4989'>DRILL-4989</a>] -         Fix TestParquetWriter.testImpalaParquetBinaryAsTimeStamp_DictChange
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4990'>DRILL-4990</a>] -         Use new HDFS API access instead of listStatus to check if users have permissions to access workspace.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4993'>DRILL-4993</a>] -         Documentation: Wrong output displayed for convert_from() with a map
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4995'>DRILL-4995</a>] -         Allow lazy init when dynamic UDF support is disabled
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-5004'>DRILL-5004</a>] -         Parquet date correction gives null pointer exception if there is no createdBy entry in the metadata
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-5007'>DRILL-5007</a>] -         Dynamic UDF lazy-init does not work correctly in multi-node cluster
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-5009'>DRILL-5009</a>] -         Query with a simple join fails on Hive generated parquet
+</li>
+</ul>
+                        
+<h2>        Improvement
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-1950'>DRILL-1950</a>] -         Implement filter pushdown for Parquet
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-3178'>DRILL-3178</a>] -         csv reader should allow newlines inside quotes 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4309'>DRILL-4309</a>] -         Make this option store.hive.optimize_scan_with_native_readers=true default
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4653'>DRILL-4653</a>] -         Malformed JSON should not stop the entire query from progressing
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4674'>DRILL-4674</a>] -         Allow casting to boolean the same literals as in Postgre
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4752'>DRILL-4752</a>] -         Remove submit_plan script from Drill distribution
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4771'>DRILL-4771</a>] -         Drill should avoid doing the same join twice if count(distinct) exists
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4792'>DRILL-4792</a>] -         Include session options used for a query as part of the profile
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4800'>DRILL-4800</a>] -         Improve parquet reader performance
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4864'>DRILL-4864</a>] -         Add ANSI format for date/time functions
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4865'>DRILL-4865</a>] -         Add ANSI format for date/time functions
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4927'>DRILL-4927</a>] -         Add support for Null Equality Joins
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4967'>DRILL-4967</a>] -         Adding template_name to source code generated using freemarker template
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4980'>DRILL-4980</a>] -         Upgrading of the approach of parquet date correctness status detection
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4986'>DRILL-4986</a>] -         Allow users to customize the Drill log file name
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4987'>DRILL-4987</a>] -         Use ImpersonationUtil in RemoteFunctionRegistry
+</li>
+</ul>
+            
+<h2>        New Feature
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-1268'>DRILL-1268</a>] -         C++ Client. Write Unit Test for Drill Client
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-3423'>DRILL-3423</a>] -         Add New HTTPD format plugin
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4714'>DRILL-4714</a>] -         Add metadata and prepared statement APIs to DrillClient&lt;-&gt;Drillbit interface
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4726'>DRILL-4726</a>] -         Dynamic UDFs support
+</li>
+</ul>
+                                                        
+<h2>        Task
+</h2>
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4853'>DRILL-4853</a>] -         Update C++ protobuf source files
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/DRILL-4886'>DRILL-4886</a>] -         Merge maprdb format plugin source code
+</li>
+</ul>
+              
+
+    
+                
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/_docs/sql-reference/080-reserved-keywords.md
----------------------------------------------------------------------
diff --git a/_docs/sql-reference/080-reserved-keywords.md b/_docs/sql-reference/080-reserved-keywords.md
index e631adf..288f513 100644
--- a/_docs/sql-reference/080-reserved-keywords.md
+++ b/_docs/sql-reference/080-reserved-keywords.md
@@ -1,6 +1,6 @@
 ---
 title: "Reserved Keywords"
-date: 2016-08-04 00:23:09 UTC
+date: 2016-11-18 21:59:17 UTC
 parent: "SQL Reference"
 ---
 When you use a reserved keyword in a Drill query, enclose the word in
@@ -13,5 +13,5 @@ keyword:
 The following table provides the Drill reserved keywords that require back
 ticks:
 
-<table ><tbody><tr><td valign="top" ><h1 id="ReservedKeywords-A">A</h1><p>ABS<br />ALL<br />ALLOCATE<br />ALLOW<br />ALTER<br />AND<br />ANY<br />ARE<br />ARRAY<br />AS<br />ASENSITIVE<br />ASYMMETRIC<br />AT<br />ATOMIC<br />AUTHORIZATION<br />AVG</p><h1 id="ReservedKeywords-B">B</h1><p>BEGIN<br />BETWEEN<br />BIGINT<br />BINARY<br />BIT<br />BLOB<br />BOOLEAN<br />BOTH<br />BY</p><h1 id="ReservedKeywords-C">C</h1><p>CALL<br />CALLED<br />CARDINALITY<br />CASCADED<br />CASE<br />CAST<br />CEIL<br />CEILING<br />CHAR<br />CHARACTER<br />CHARACTER_LENGTH<br />CHAR_LENGTH<br />CHECK<br />CLOB<br />CLOSE<br />COALESCE<br />COLLATE<br />COLLECT<br />COLUMN<br />COMMIT<br />CONDITION<br />CONNECT<br />CONSTRAINT<br />CONVERT<br />CORR<br />CORRESPONDING<br />COUNT<br />COVAR_POP<br />COVAR_SAMP<br />CREATE<br />CROSS<br />CUBE<br />CUME_DIST<br />CURRENT<br />CURRENT_CATALOG<br />CURRENT_DATE<br />CURRENT_DEFAULT_TRANSFORM_GROUP<br />CURRENT_PATH<br />CURRENT_ROLE<br />CURRENT_SCHEMA<br 
 />CURRENT_TIME<br />CURRENT_TIMESTAMP<br />CURRENT_TRANSFORM_GROUP_FOR_TYPE<br />CURRENT_USER<br />CURSOR<br />CYCLE</p></td><td valign="top" ><h1 id="ReservedKeywords-D">D</h1><p>DATABASES<br />DATE<br />DAY<br />DEALLOCATE<br />DEC<br />DECIMAL<br />DECLARE<br />DEFAULT<br />DEFAULT_KW<br />DELETE<br />DENSE_RANK<br />DEREF<br />DESCRIBE<br />DETERMINISTIC<br />DISALLOW<br />DISCONNECT<br />DISTINCT<br />DOUBLE<br />DROP<br />DYNAMIC</p><h1 id="ReservedKeywords-E">E</h1><p>EACH<br />ELEMENT<br />ELSE<br />END<br />END_EXEC<br />ESCAPE<br />EVERY<br />EXCEPT<br />EXEC<br />EXECUTE<br />EXISTS<br />EXP<br />EXPLAIN<br />EXTERNAL<br />EXTRACT</p><h1 id="ReservedKeywords-F">F</h1><p>FALSE<br />FETCH<br />FILES<br />FILTER<br />FIRST_VALUE<br />FLOAT<br />FLOOR<br />FOR<br />FOREIGN<br />FREE<br />FROM<br />FULL<br />FUNCTION<br />FUSION</p><h1 id="ReservedKeywords-G">G</h1><p>GET<br />GLOBAL<br />GRANT<br />GROUP<br />GROUPING</p><h1 id="ReservedKeywords-H">H</h1><p>HAVING<br />HOLD<b
 r />HOUR</p></td><td valign="top" ><h1 id="ReservedKeywords-I">I</h1><p>IDENTITY<br />IF<br />IMPORT<br />IN<br />INDICATOR<br />INNER<br />INOUT<br />INSENSITIVE<br />INSERT<br />INT<br />INTEGER<br />INTERSECT<br />INTERSECTION<br />INTERVAL<br />INTO<br />IS</p><h1 id="ReservedKeywords-J">J</h1><p>JOIN</p><h1 id="ReservedKeywords-L">L</h1><p>LANGUAGE<br />LARGE<br />LAST_VALUE<br />LATERAL<br />LEADING<br />LEFT<br />LIKE<br />LIMIT<br />LN<br />LOCAL<br />LOCALTIME<br />LOCALTIMESTAMP<br />LOWER</p><h1 id="ReservedKeywords-M">M</h1><p>MATCH<br />MAX<br />MEMBER<br />MERGE<br />METHOD<br />MIN<br />MINUTE<br />MOD<br />MODIFIES<br />MODULE<br />MONTH<br />MULTISET</p><h1 id="ReservedKeywords-N">N</h1><p>NATIONAL<br />NATURAL<br />NCHAR<br />NCLOB<br />NEW<br />NO<br />NONE<br />NORMALIZE<br />NOT<br />NULL<br />NULLIF<br />NUMERIC</p><h1 id="ReservedKeywords-O">O</h1><p>OCTET_LENGTH<br />OF<br />OFFSET<br />OLD<br />ON<br />ONLY<br />OPEN<br />OR<br />ORDER<br />OUT<br />OUTER<br
  />OVER<br />OVERLAPS<br />OVERLAY</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-P">P</h1><p>PARAMETER<br />PARTITION<br />PERCENTILE_CONT<br />PERCENTILE_DISC<br />PERCENT_RANK<br />POSITION<br />POWER<br />PRECISION<br />PREPARE<br />PRIMARY<br />PROCEDURE</p><h1 id="ReservedKeywords-R">R</h1><p>RANGE<br />RANK<br />READS<br />REAL<br />RECURSIVE<br />REF<br />REFERENCES<br />REFERENCING<br />REGR_AVGX<br />REGR_AVGY<br />REGR_COUNT<br />REGR_INTERCEPT<br />REGR_R2<br />REGR_SLOPE<br />REGR_SXX<br />REGR_SXY<br />RELEASE<br />REPLACE<br />RESULT<br />RETURN<br />RETURNS<br />REVOKE<br />RIGHT<br />ROLLBACK<br />ROLLUP<br />ROW<br />ROWS<br />ROW_NUMBER</p><h1 id="ReservedKeywords-S">S</h1><p>SAVEPOINT<br />SCHEMAS<br />SCOPE<br />SCROLL<br />SEARCH<br />SECOND<br />SELECT<br />SENSITIVE<br />SESSION_USER<br />SET<br />SHOW<br />SIMILAR<br />SMALLINT<br />SOME<br />SPECIFIC<br />SPECIFICTYPE<br />SQL<br />SQLEXCEPTION<br />SQLSTATE<br />SQLWARNING<br />SQRT<br />ST
 ART<br />STATIC<br />STDDEV_POP<br />STDDEV_SAMP<br />SUBMULTISET<br />SUBSTRING<br />SUM<br />SYMMETRIC<br />SYSTEM<br />SYSTEM_USER</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-T">T</h1><p>TABLE<br />TABLES<br />TABLESAMPLE<br />THEN<br />TIME<br />TIMESTAMP<br />TIMEZONE_HOUR<br />TIMEZONE_MINUTE<br />TINYINT<br />TO<br />TRAILING<br />TRANSLATE<br />TRANSLATION<br />TREAT<br />TRIGGER<br />TRIM<br />TRUE</p><h1 id="ReservedKeywords-U">U</h1><p>UESCAPE<br />UNION<br />UNIQUE<br />UNKNOWN<br />UNNEST<br />UPDATE<br />UPPER<br />USE<br />USER<br />USING</p><h1 id="ReservedKeywords-V">V</h1><p>VALUE<br />VALUES<br />VARBINARY<br />VARCHAR<br />VARYING<br />VAR_POP<br />VAR_SAMP</p><h1 id="ReservedKeywords-W">W</h1><p>WHEN<br />WHENEVER<br />WHERE<br />WIDTH_BUCKET<br />WINDOW<br />WITH<br />WITHIN<br />WITHOUT</p><h1 id="ReservedKeywords-Y">Y</h1><p>YEAR</p></td></tr></tbody></table></div>
+<table ><tbody><tr><td valign="top" ><h1 id="ReservedKeywords-A">A</h1><p>ABS<br />ALL<br />ALLOCATE<br />ALLOW<br />ALTER<br />AND<br />ANY<br />ARE<br />ARRAY<br />AS<br />ASENSITIVE<br />ASYMMETRIC<br />AT<br />ATOMIC<br />AUTHORIZATION<br />AVG</p><h1 id="ReservedKeywords-B">B</h1><p>BEGIN<br />BETWEEN<br />BIGINT<br />BINARY<br />BIT<br />BLOB<br />BOOLEAN<br />BOTH<br />BY</p><h1 id="ReservedKeywords-C">C</h1><p>CALL<br />CALLED<br />CARDINALITY<br />CASCADED<br />CASE<br />CAST<br />CEIL<br />CEILING<br />CHAR<br />CHARACTER<br />CHARACTER_LENGTH<br />CHAR_LENGTH<br />CHECK<br />CLOB<br />CLOSE<br />COALESCE<br />COLLATE<br />COLLECT<br />COLUMN<br />COMMIT<br />CONDITION<br />CONNECT<br />CONSTRAINT<br />CONVERT<br />CORR<br />CORRESPONDING<br />COUNT<br />COVAR_POP<br />COVAR_SAMP<br />CREATE<br />CROSS<br />CUBE<br />CUME_DIST<br />CURRENT<br />CURRENT_CATALOG<br />CURRENT_DATE<br />CURRENT_DEFAULT_TRANSFORM_GROUP<br />CURRENT_PATH<br />CURRENT_ROLE<br />CURRENT_SCHEMA<br 
 />CURRENT_TIME<br />CURRENT_TIMESTAMP<br />CURRENT_TRANSFORM_GROUP_FOR_TYPE<br />CURRENT_USER<br />CURSOR<br />CYCLE</p></td><td valign="top" ><h1 id="ReservedKeywords-D">D</h1><p>DATABASES<br />DATE<br />DAY<br />DEALLOCATE<br />DEC<br />DECIMAL<br />DECLARE<br />DEFAULT<br />DEFAULT_KW<br />DELETE<br />DENSE_RANK<br />DEREF<br />DESCRIBE<br />DETERMINISTIC<br />DISALLOW<br />DISCONNECT<br />DISTINCT<br />DOUBLE<br />DROP<br />DYNAMIC</p><h1 id="ReservedKeywords-E">E</h1><p>EACH<br />ELEMENT<br />ELSE<br />END<br />END_EXEC<br />ESCAPE<br />EVERY<br />EXCEPT<br />EXEC<br />EXECUTE<br />EXISTS<br />EXP<br />EXPLAIN<br />EXTERNAL<br />EXTRACT</p><h1 id="ReservedKeywords-F">F</h1><p>FALSE<br />FETCH<br />FILES<br />FILTER<br />FIRST_VALUE<br />FLOAT<br />FLOOR<br />FOR<br />FOREIGN<br />FREE<br />FROM<br />FULL<br />FUNCTION<br />FUSION</p><h1 id="ReservedKeywords-G">G</h1><p>GET<br />GLOBAL<br />GRANT<br />GROUP<br />GROUPING</p><h1 id="ReservedKeywords-H">H</h1><p>HAVING<br />HOLD<b
 r />HOUR</p></td><td valign="top" ><h1 id="ReservedKeywords-I">I</h1><p>IDENTITY<br />IF<br />IMPORT<br />IN<br />INDICATOR<br />INNER<br />INOUT<br />INSENSITIVE<br />INSERT<br />INT<br />INTEGER<br />INTERSECT<br />INTERSECTION<br />INTERVAL<br />INTO<br />IS</p><h1 id="ReservedKeywords-J">J</h1><p>JAR<br />JOIN</p><h1 id="ReservedKeywords-L">L</h1><p>LANGUAGE<br />LARGE<br />LAST_VALUE<br />LATERAL<br />LEADING<br />LEFT<br />LIKE<br />LIMIT<br />LN<br />LOCAL<br />LOCALTIME<br />LOCALTIMESTAMP<br />LOWER</p><h1 id="ReservedKeywords-M">M</h1><p>MATCH<br />MAX<br />MEMBER<br />MERGE<br />METHOD<br />MIN<br />MINUTE<br />MOD<br />MODIFIES<br />MODULE<br />MONTH<br />MULTISET</p><h1 id="ReservedKeywords-N">N</h1><p>NATIONAL<br />NATURAL<br />NCHAR<br />NCLOB<br />NEW<br />NO<br />NONE<br />NORMALIZE<br />NOT<br />NULL<br />NULLIF<br />NUMERIC</p><h1 id="ReservedKeywords-O">O</h1><p>OCTET_LENGTH<br />OF<br />OFFSET<br />OLD<br />ON<br />ONLY<br />OPEN<br />OR<br />ORDER<br />OUT<br /
 >OUTER<br />OVER<br />OVERLAPS<br />OVERLAY</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-P">P</h1><p>PARAMETER<br />PARTITION<br />PERCENTILE_CONT<br />PERCENTILE_DISC<br />PERCENT_RANK<br />POSITION<br />POWER<br />PRECISION<br />PREPARE<br />PRIMARY<br />PROCEDURE</p><h1 id="ReservedKeywords-R">R</h1><p>RANGE<br />RANK<br />READS<br />REAL<br />RECURSIVE<br />REF<br />REFERENCES<br />REFERENCING<br />REGR_AVGX<br />REGR_AVGY<br />REGR_COUNT<br />REGR_INTERCEPT<br />REGR_R2<br />REGR_SLOPE<br />REGR_SXX<br />REGR_SXY<br />RELEASE<br />REPLACE<br />RESULT<br />RETURN<br />RETURNS<br />REVOKE<br />RIGHT<br />ROLLBACK<br />ROLLUP<br />ROW<br />ROWS<br />ROW_NUMBER</p><h1 id="ReservedKeywords-S">S</h1><p>SAVEPOINT<br />SCHEMAS<br />SCOPE<br />SCROLL<br />SEARCH<br />SECOND<br />SELECT<br />SENSITIVE<br />SESSION_USER<br />SET<br />SHOW<br />SIMILAR<br />SMALLINT<br />SOME<br />SPECIFIC<br />SPECIFICTYPE<br />SQL<br />SQLEXCEPTION<br />SQLSTATE<br />SQLWARNING<br />SQR
 T<br />START<br />STATIC<br />STDDEV_POP<br />STDDEV_SAMP<br />SUBMULTISET<br />SUBSTRING<br />SUM<br />SYMMETRIC<br />SYSTEM<br />SYSTEM_USER</p></td><td valign="top" colspan="1" ><h1 id="ReservedKeywords-T">T</h1><p>TABLE<br />TABLES<br />TABLESAMPLE<br />THEN<br />TIME<br />TIMESTAMP<br />TIMEZONE_HOUR<br />TIMEZONE_MINUTE<br />TINYINT<br />TO<br />TRAILING<br />TRANSLATE<br />TRANSLATION<br />TREAT<br />TRIGGER<br />TRIM<br />TRUE</p><h1 id="ReservedKeywords-U">U</h1><p>UESCAPE<br />UNION<br />UNIQUE<br />UNKNOWN<br />UNNEST<br />UPDATE<br />UPPER<br />USE<br />USER<br />USING</p><h1 id="ReservedKeywords-V">V</h1><p>VALUE<br />VALUES<br />VARBINARY<br />VARCHAR<br />VARYING<br />VAR_POP<br />VAR_SAMP</p><h1 id="ReservedKeywords-W">W</h1><p>WHEN<br />WHENEVER<br />WHERE<br />WIDTH_BUCKET<br />WINDOW<br />WITH<br />WITHIN<br />WITHOUT</p><h1 id="ReservedKeywords-Y">Y</h1><p>YEAR</p></td></tr></tbody></table></div>
 

http://git-wip-us.apache.org/repos/asf/drill/blob/bb638b1d/blog/_posts/2016-11-17-drill-1.9-released.md
----------------------------------------------------------------------
diff --git a/blog/_posts/2016-11-17-drill-1.9-released.md b/blog/_posts/2016-11-17-drill-1.9-released.md
new file mode 100644
index 0000000..f97b51c
--- /dev/null
+++ b/blog/_posts/2016-11-17-drill-1.9-released.md
@@ -0,0 +1,26 @@
+---
+layout: post
+title: "Drill 1.9 Released"
+code: drill-1.9-released
+excerpt: Apache Drill 1.9's highlights are&#58; asynchronous Parquet reader, Parquet filter pushdown, and dynamic UDF support.
+authors: ["bbevens"]
+---
+
+Today, we're happy to announce the availability of Drill 1.9.0. You can download it [here](https://drill.apache.org/download/).
+
+This release provides the following new features and improvements:
+
+## Asynchronous Parquet Reader 
+The new asynchronous Parquet reader feature improves the performance of the Parquet Scan operator by increasing the speed at which the Parquet reader scans, decompresses, and decodes data. See Asynchronous Parquet Reader. 
+
+## Parquet Filter Pushdown  
+The new Parquet filter pushdown feature optimizes Drill’s performance by pruning extraneous data from a Parquet file to reduce the amount of data that Drill scans and reads when a query on a Parquet file contains a filter expression. See Parquet Filter Pushdown.
+
+## Dynamic UDF Support  
+The new Dynamic UDF feature enables users to register and unregister UDFs on their own using the new CREATE FUNCTION USING JAR and DROP FUNCTION USING JAR commands. See Dynamic UDFs.  
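+
+A minimal sketch of the two commands; the JAR name is illustrative, and the binary and source JARs must first be placed in the staging location described in the Dynamic UDFs documentation:
+
+       CREATE FUNCTION USING JAR 'simple_functions-1.0.jar';
+       DROP FUNCTION USING JAR 'simple_functions-1.0.jar';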
+
+## HTTPD Format Plugin
+The new HTTPD format plugin adds the capability to query HTTP web server logs natively. The plugin also includes the parse_url() and parse_query() UDFs: parse_url() returns a map of the URL's parts, and parse_query() returns a map of the query string's key/value pairs.  
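+
+As a quick illustration, a minimal sketch that calls parse_query() on a literal query string; the string is illustrative:
+
+       SELECT parse_query('user=drill&lang=en') AS params FROM (VALUES(1));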
+
+A complete list of JIRAs resolved in the 1.9.0 release can be found [here](https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12337861&styleName=Html&projectId=12313820&Create=Create&atl_token=A5KQ-2QAV-T4JA-FDED%7Ce3f48e86b488db564d324462cd5233d775c28018%7Clin).
+

