drill-commits mailing list archives

From bridg...@apache.org
Subject drill-site git commit: updates for 1.2
Date Thu, 01 Oct 2015 01:11:18 GMT
Repository: drill-site
Updated Branches:
  refs/heads/asf-site b3bd038bc -> 72602b140


updates for 1.2


Project: http://git-wip-us.apache.org/repos/asf/drill-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/drill-site/commit/72602b14
Tree: http://git-wip-us.apache.org/repos/asf/drill-site/tree/72602b14
Diff: http://git-wip-us.apache.org/repos/asf/drill-site/diff/72602b14

Branch: refs/heads/asf-site
Commit: 72602b14073abdc8111ca7fb4b8fdf913029c8b1
Parents: b3bd038
Author: Bridget Bevens <bbevens@maprtech.com>
Authored: Wed Sep 30 18:11:01 2015 -0700
Committer: Bridget Bevens <bbevens@maprtech.com>
Committed: Wed Sep 30 18:11:01 2015 -0700

----------------------------------------------------------------------
 .../index.html                                  |   5 +
 docs/configure-drill-introduction/index.html    |   2 +-
 .../index.html                                  |   8 +-
 docs/data-type-conversion/index.html            |  10 +-
 docs/img/storage_plugin_config.png              | Bin 0 -> 52174 bytes
 docs/parquet-format/index.html                  |  30 ++-
 docs/plugin-configuration-basics/index.html     |  22 +-
 docs/querying-hbase/index.html                  | 265 +++++++++++--------
 docs/querying-hive/index.html                   |  10 +-
 docs/querying-parquet-files/index.html          |   5 +-
 docs/sql-extensions/index.html                  |  10 +-
 docs/start-up-options/index.html                |   2 +
 docs/starting-the-web-console/index.html        |  27 +-
 docs/storage-plugin-registration/index.html     |   2 +-
 docs/supported-data-types/index.html            |  13 +-
 feed.xml                                        |   4 +-
 16 files changed, 263 insertions(+), 152 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/configuration-options-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/configuration-options-introduction/index.html b/docs/configuration-options-introduction/index.html
index bb66fcf..ff53255 100644
--- a/docs/configuration-options-introduction/index.html
+++ b/docs/configuration-options-introduction/index.html
@@ -1285,6 +1285,11 @@ Drill sources the local <code>&lt;drill_installation_directory&gt;/conf</code> d
 <td>Output format for data written to tables with the CREATE TABLE AS (CTAS) command. Allowed values are parquet, json, psv, csv, or tsv.</td>
 </tr>
 <tr>
+<td>store.hive.optimize_scan_with_native_readers</td>
+<td>FALSE</td>
+<td>Optimize reads of Parquet-backed external tables from Hive by using Drill native readers instead of the Hive Serde interface. (Drill 1.2 and later)</td>
+</tr>
+<tr>
 <td>store.json.all_text_mode</td>
 <td>FALSE</td>
 <td>Drill reads all data from the JSON files as VARCHAR. Prevents schema change errors.</td>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/configure-drill-introduction/index.html
----------------------------------------------------------------------
diff --git a/docs/configure-drill-introduction/index.html b/docs/configure-drill-introduction/index.html
index a8caca3..bf2e3f9 100644
--- a/docs/configure-drill-introduction/index.html
+++ b/docs/configure-drill-introduction/index.html
@@ -1011,7 +1011,7 @@ statements is Parquet. Using a configuration option, you can modify Drill to sto
 
 <h2 id="query-profile-data-storage-configuration">Query Profile Data Storage Configuration</h2>
 
-<p>To enjoy a problem-free Drill Web Console experience, you need to <a href="/docs/persistent-configuration-storage/#configuring-zookeeper-pstore">configure the ZooKeeper PStore</a>.</p>
+<p>To avoid problems working with the Web Console, you need to <a href="/docs/persistent-configuration-storage/#configuring-zookeeper-pstore">configure the ZooKeeper PStore</a>.</p>
 
     
       

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/configuring-web-console-and-rest-api-security/index.html
----------------------------------------------------------------------
diff --git a/docs/configuring-web-console-and-rest-api-security/index.html b/docs/configuring-web-console-and-rest-api-security/index.html
index aebdb85..e0c815a 100644
--- a/docs/configuring-web-console-and-rest-api-security/index.html
+++ b/docs/configuring-web-console-and-rest-api-security/index.html
@@ -994,9 +994,13 @@ you can limit the access of certain users to Web Console functionality, such as
 
 <h2 id="https-support">HTTPS Support</h2>
 
-<p>Drill 1.2 uses the Linux Pluggable Authentication Module (PAM) and code-level support for transport layer security (TLS) to secure the Web Console and REST API. By default, the Web Console and REST API now support the HTTPS protocol.</p>
+<p>Drill 1.2 uses the Linux Pluggable Authentication Module (PAM) and code-level support for transport layer security (TLS) to secure the Web Console and REST API. By default, the Web Console and REST API support the HTTP protocol. You set the following start-up option to TRUE to enable HTTPS support:</p>
 
-<p>By default, Drill generates a self-signed certificate that works with SSL for HTTPS access to the Web Console. Because Drill uses a self-signed certificate, you see a warning in the browser when you go to <code>https://&lt;node IP address&gt;:8047</code>. The Chrome browser, for example, requires you to click <code>Advanced</code>, and then <code>Proceed to &lt;address&gt; (unsafe)</code>.  If you have a signed certificate by an authority, you can set up a custom SSL to avoid this warning. You can set up SSL to specify the keystore or truststore, or both, for your organization, as described in the next section.</p>
+<p><code>drill.exec.http.ssl_enabled</code></p>
+
+<p>By default this start-up option is set to FALSE.</p>
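
For reference: because this is a start-up (boot) option rather than a system/session option, it is set in the drill-override.conf file, not with ALTER SYSTEM. A minimal sketch, assuming the stock drill-override.conf layout (the cluster-id and zk.connect values are illustrative placeholders):

    drill.exec: {
      cluster-id: "drillbits1",       # illustrative placeholder
      zk.connect: "localhost:2181",   # illustrative placeholder
      http.ssl_enabled: true          # enables HTTPS for the Web Console and REST API
    }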
+
+<p>Drill generates a self-signed certificate that works with SSL for HTTPS access to the Web Console. Because Drill uses a self-signed certificate, you see a warning in the browser when you go to <code>https://&lt;node IP address&gt;:8047</code>. The Chrome browser, for example, requires you to click <code>Advanced</code>, and then <code>Proceed to &lt;address&gt; (unsafe)</code>. If you have a certificate signed by a certificate authority, you can set up a custom SSL configuration to avoid this warning. You can set up SSL to specify the keystore or truststore, or both, for your organization, as described in the next section.</p>
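
If you need a keystore for testing a custom SSL setup, a typical Java keytool invocation looks like the following sketch (the alias, validity, and keystore path are placeholders; the exact properties Drill reads for the keystore and truststore are covered in the next section):

    keytool -genkeypair -keyalg RSA -alias drill \
      -keystore /opt/drill/conf/keystore.jks -validity 365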
 
 <h2 id="setting-up-a-custom-ssl-configuration">Setting Up a Custom SSL Configuration</h2>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/data-type-conversion/index.html
----------------------------------------------------------------------
diff --git a/docs/data-type-conversion/index.html b/docs/data-type-conversion/index.html
index 0200e07..38aa949 100644
--- a/docs/data-type-conversion/index.html
+++ b/docs/data-type-conversion/index.html
@@ -1020,7 +1020,7 @@
 <p>See the following tables for information about the data types to use for casting:</p>
 
 <ul>
-<li><a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">CONVERT_TO and CONVERT_FROM Data Types</a></li>
+<li><a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">CONVERT_TO and CONVERT_FROM Data Types</a></li>
 <li><a href="/docs/supported-data-types">Supported Data Types for Casting</a></li>
 <li><a href="/docs/supported-data-types/#explicit-type-casting-maps">Explicit Type Casting Maps</a></li>
 </ul>
@@ -1138,13 +1138,13 @@ CONVERT_FROM(column, type)
 </code></pre></div>
 <p><em>column</em> is the name of a column Drill reads.</p>
 
-<p><em>type</em> is one of the encoding types listed in the <a href="/docs/data-types#convert_to-and-convert_from-data-types">CONVERT_TO/FROM Data Types</a> table. </p>
+<p><em>type</em> is one of the encoding types listed in the <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">CONVERT_TO/FROM data types</a> table. </p>
 
 <h3 id="convert_to-and-convert_from-usage-notes">CONVERT_TO and CONVERT_FROM Usage Notes</h3>
 
 <p>CONVERT_FROM and CONVERT_TO methods transform a known binary representation/encoding to a Drill internal format. Use CONVERT_TO and CONVERT_FROM instead of the CAST function for converting binary data types. CONVERT_TO/FROM functions work for data in a binary representation and are more efficient to use than CAST. </p>
 
-<p>Drill can optimize scans of HBase tables when you use the *_BE encoded types shown in section  <a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">&quot;CONVERT_TO and CONVERT_FROM Data Types&quot;</a> on big endian-encoded data. You need to use the HBase storage plugin and query data as described in <a href="/docs/querying-hbase">&quot;Querying Hbase&quot;</a>. To write Parquet binary data, convert SQL data <em>to</em> binary data and store the data in a Parquet table while creating a table as a selection (CTAS).</p>
+<p>Drill can optimize scans of HBase tables when you use the *_BE encoded types shown in section  <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">&quot;Data Types for CONVERT_TO and CONVERT_FROM Functions&quot;</a> on big endian-encoded data. You need to use the HBase storage plugin and query data as described in <a href="/docs/querying-hbase">&quot;Querying Hbase&quot;</a>. To write Parquet binary data, convert SQL data <em>to</em> binary data and store the data in a Parquet table while creating a table as a selection (CTAS).</p>
 
 <p>CONVERT_TO also converts an SQL data type to complex types, including HBase byte arrays, JSON and Parquet arrays, and maps. CONVERT_FROM converts from complex types, including HBase arrays, JSON and Parquet arrays and maps to an SQL data type. </p>
 
@@ -1167,7 +1167,7 @@ SELECT * FROM students;
 +-------------+---------------------+---------------------------------------------------------------------------+
 4 rows selected (1.335 seconds)
 </code></pre></div>
-<p>You use the CONVERT_FROM function to decode the binary data, selecting a data type to use from the <a href="/docs/data-type-conversion/#convert_to-and-convert_from-data-types">list of supported types</a>. JSON supports strings. To convert bytes to strings, use the UTF8 type:</p>
+<p>You use the CONVERT_FROM function to decode the binary data, selecting a data type to use from the <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">list of supported types</a>. JSON supports strings. To convert bytes to strings, use the UTF8 type:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">SELECT CONVERT_FROM(row_key, &#39;UTF8&#39;) AS studentid, 
        CONVERT_FROM(students.account.name, &#39;UTF8&#39;) AS name, 
        CONVERT_FROM(students.address.state, &#39;UTF8&#39;) AS state, 
@@ -1251,7 +1251,7 @@ SELECT * FROM students;
 
 <h3 id="convert-the-binary-hbase-students-table-to-json-data">Convert the Binary HBase Students Table to JSON Data</h3>
 
-<p>First, you set the storage format to JSON. Next, you use the CREATE TABLE AS (CTAS) statement to convert from a selected file of a different format, HBase in this example, to the storage format. You then convert the JSON file to Parquet using a similar procedure. Set the storage format to Parquet, and use a CTAS statement to convert to Parquet from JSON. In each case, you <a href="/docs/data-type-conversion/#convert_to-and-convert_from-data-types">select UTF8</a> as the file format because the data you are converting from and then to consists of strings.</p>
+<p>First, you set the storage format to JSON. Next, you use the CREATE TABLE AS (CTAS) statement to convert from a selected file of a different format, HBase in this example, to the storage format. You then convert the JSON file to Parquet using a similar procedure. Set the storage format to Parquet, and use a CTAS statement to convert to Parquet from JSON. In each case, you <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">select UTF8</a> as the file format because the data you are converting from and then to consists of strings.</p>
 
 <ol>
 <li><p>Start Drill on the Drill Sandbox and set the default storage format from Parquet to JSON.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/img/storage_plugin_config.png
----------------------------------------------------------------------
diff --git a/docs/img/storage_plugin_config.png b/docs/img/storage_plugin_config.png
new file mode 100644
index 0000000..966dc98
Binary files /dev/null and b/docs/img/storage_plugin_config.png differ

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/parquet-format/index.html
----------------------------------------------------------------------
diff --git a/docs/parquet-format/index.html b/docs/parquet-format/index.html
index 0185973..ba5556a 100644
--- a/docs/parquet-format/index.html
+++ b/docs/parquet-format/index.html
@@ -1042,20 +1042,22 @@ query. </p>
 
 <p>Use the <code>store.format</code> option to set the CTAS output format of a Parquet row group at the session or system level.</p>
 
-<p>Use the ALTER command to set the <code>store.format</code> option.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    ALTER SESSION SET `store.format` = &#39;parquet&#39;;
-    ALTER SYSTEM SET `store.format` = &#39;parquet&#39;;
-</code></pre></div>
+<p>Use the ALTER command to set the <code>store.format</code> option.  </p>
+
+<p><code>ALTER SESSION SET `store.format` = &#39;parquet&#39;;</code><br>
+<code>ALTER SYSTEM SET `store.format` = &#39;parquet&#39;;</code>  </p>
+
 <h3 id="configuring-the-size-of-parquet-files">Configuring the Size of Parquet Files</h3>
 
 <p>Configuring the size of Parquet files by setting the <code>store.parquet.block-size</code> can improve write performance. The block size is the size of MFS, HDFS, or the file system. </p>
 
 <p>The larger the block size, the more memory Drill needs for buffering data. Parquet files that contain a single block maximize the amount of data Drill stores contiguously on disk. Given a single row group per file, Drill stores the entire Parquet file onto the block, avoiding network I/O.</p>
 
-<p>To maximize performance, set the target size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system by using the <code>store.parquet.block-size</code>:         </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    ALTER SESSION SET `store.parquet.block-size` = 536870912;         
-    ALTER SYSTEM SET `store.parquet.block-size` = 536870912  
-</code></pre></div>
+<p>To maximize performance, set the target size of a Parquet row group to the number of bytes less than or equal to the block size of MFS, HDFS, or the file system by using the <code>store.parquet.block-size</code>:  </p>
+
+<p><code>ALTER SESSION SET `store.parquet.block-size` = 536870912;</code><br>
+<code>ALTER SYSTEM SET `store.parquet.block-size` = 536870912</code>  </p>
+
 <p>The default block size is 536870912 bytes.</p>
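
(For reference, 536870912 bytes = 512 x 1024 x 1024 bytes, that is, 512 MB, which matches a common HDFS block size.)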
 
 <h3 id="type-mapping">Type Mapping</h3>
@@ -1171,12 +1173,22 @@ FROM dfs.tmp.`sampleparquet` t;
 <td>4-byte signed integer</td>
 </tr>
 <tr>
-<td>None</td>
+<td>None*</td>
 <td>INT96</td>
 <td>12-byte signed int</td>
 </tr>
 </tbody></table>
 
+<p>* Drill 1.2 and later supports reading the Parquet INT96 type.</p>
+
+<h2 id="about-int96-support">About INT96 Support</h2>
+
+<p>Drill 1.2 and later supports reading the Parquet INT96 type. For example, to decode a timestamp from Hive or Impala, which is of type INT96, use the CONVERT_FROM function and the <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">TIMESTAMP_IMPALA</a> type argument:</p>
+
+<p><code>SELECT CONVERT_FROM(timestamp_field, &#39;TIMESTAMP_IMPALA&#39;) as timestamp_field FROM `dfs.file_with_timestamp.parquet`;</code></p>
+
+<p>Because INT96 is supported for reads only, you cannot use TIMESTAMP_IMPALA as a data type argument with CONVERT_TO.</p>
+
 <h3 id="sql-types-to-parquet-logical-types">SQL Types to Parquet Logical Types</h3>
 
 <p>Parquet also supports logical types, fully described on the <a href="https://github.com/Parquet/parquet-format/blob/master/LogicalTypes.md">Apache Parquet site</a>. Embedded types, JSON and BSON, annotate a binary primitive type representing a JSON or BSON document. The logical types and their mapping to SQL types are:</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/plugin-configuration-basics/index.html
----------------------------------------------------------------------
diff --git a/docs/plugin-configuration-basics/index.html b/docs/plugin-configuration-basics/index.html
index 537e7fb..4aaa4da 100644
--- a/docs/plugin-configuration-basics/index.html
+++ b/docs/plugin-configuration-basics/index.html
@@ -996,17 +996,27 @@ restart any of the Drillbits when you add or update a storage plugin configurati
 
 <h2 id="using-the-drill-web-console">Using the Drill Web Console</h2>
 
-<p>Assuming you are <a href="/docs/configuring-web-console-and-rest-api-security/">authorized</a>, you can use the Drill Web Console to update or add a new storage plugin configuration. The Drill shell needs to be running to access the Drill Web Console. In Drill 1.2 and later, to open the Drill Web Console, launch a web browser, and go to: <code>https://&lt;IP address or host name&gt;:8047</code> of any Drillbit in the cluster. In Drill 1.1 and earlier, use <code>http</code> instead of <code>https</code>. Select the Storage tab to view, update, or add a new storage plugin configuration. </p>
+<p>You can use the Drill Web Console to update or add a new storage plugin configuration. The Drill shell needs to be running to start the Web Console. </p>
 
 <p>To create a name and new configuration:</p>
 
 <ol>
-<li>Enter a name in <strong>New Storage Plugin</strong>.
+<li><a href="/docs/starting-drill-on-linux-and-mac-os-x/">Start the Drill shell</a>.<br></li>
+<li><a href="/docs/starting-the-web-console/">Start the Web Console</a>.<br>
+The Storage tab appears on the Web Console if you are <a href="/docs/configuring-web-console-and-rest-api-security/">authorized</a> to view, update, or add storage plugins.<br></li>
+<li><p>On the Storage tab, enter a name in <strong>New Storage Plugin</strong>.
 Each configuration registered with Drill must have a distinct
-name. Names are case-sensitive.</li>
-<li>Click <strong>Create</strong>.<br></li>
-<li>In Configuration, it is recommended that you modify a copy of an existing configuration if possible. Reconfigure attributes of the storage plugin using JSON formatting. The Storage Plugin Attributes table in the next section describes attributes typically reconfigured by users. </li>
-<li>Click <strong>Create</strong>.</li>
+name. Names are case-sensitive.<br>
+ <img src="/docs/img/storage_plugin_config.png" alt="sandbox plugin"></p>
+
+<div class="admonition note">
+<p class="first admonition-title">Note</p>
+<p class="last">The URL differs depending on your installation and configuration.  </p>
+</div>  </li>
+<li><p>Click <strong>Create</strong>.  </p></li>
+<li><p>In Configuration, use JSON formatting to modify a copy of an existing configuration if possible, as in the sketch after these steps.<br>
+Using a copy of an existing configuration reduces the risk of JSON coding errors. Use the Storage Plugin Attributes table in the next section as a guide for making typical modifications.  </p></li>
+<li><p>Click <strong>Create</strong>.</p></li>
 </ol>
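
For step 5, a minimal sketch of what a file-system configuration in the Configuration field can look like (the connection, workspace, and format values here are illustrative, not part of this commit):

    {
      "type": "file",
      "enabled": true,
      "connection": "file:///",
      "workspaces": {
        "root": {
          "location": "/tmp",
          "writable": true,
          "defaultInputFormat": null
        }
      },
      "formats": {
        "json": {
          "type": "json"
        }
      }
    }

Starting from a copy of the shipped dfs configuration and changing only the connection and workspace locations is the lowest-risk path, since it reduces the JSON coding errors that step 5 warns about.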
 
 <h2 id="storage-plugin-attributes">Storage Plugin Attributes</h2>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/querying-hbase/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-hbase/index.html b/docs/querying-hbase/index.html
index d67b286..b3e383f 100644
--- a/docs/querying-hbase/index.html
+++ b/docs/querying-hbase/index.html
@@ -987,130 +987,36 @@
 
     <div class="int_text" align="left">
       
-        <p>To use Drill to query HBase data, you need to understand how to work with the HBase byte arrays. If you want Drill to interpret the underlying HBase row key as something other than a byte array, you need to know the encoding of the data in HBase. By default, HBase stores data in little endian and Drill assumes the data is little endian, which is unsorted. The following table shows the sorting of typical row key IDs in bytes, encoded in little endian and big endian, respectively:</p>
-
-<table><thead>
-<tr>
-<th>IDs in Byte Notation Little Endian Sorting</th>
-<th>IDs in Decimal Notation</th>
-<th>IDs in Byte Notation Big Endian Sorting</th>
-<th>IDs in Decimal Notation</th>
-</tr>
-</thead><tbody>
-<tr>
-<td>0 x 010000 . . . 000</td>
-<td>1</td>
-<td>0 x 00000001</td>
-<td>1</td>
-</tr>
-<tr>
-<td>0 x 010100 . . . 000</td>
-<td>17</td>
-<td>0 x 00000002</td>
-<td>2</td>
-</tr>
-<tr>
-<td>0 x 020000 . . . 000</td>
-<td>2</td>
-<td>0 x 00000003</td>
-<td>3</td>
-</tr>
-<tr>
-<td>. . .</td>
-<td></td>
-<td>0 x 00000004</td>
-<td>4</td>
-</tr>
-<tr>
-<td>0 x 050000 . . . 000</td>
-<td>5</td>
-<td>0 x 00000005</td>
-<td>5</td>
-</tr>
-<tr>
-<td>. . .</td>
-<td></td>
-<td>. . .</td>
-<td></td>
-</tr>
-<tr>
-<td>0 x 0A0000 . . . 000</td>
-<td>10</td>
-<td>0 x 0000000A</td>
-<td>10</td>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td>0 x 00000101</td>
-<td>17</td>
-</tr>
-</tbody></table>
-
-<h2 id="querying-big-endian-encoded-data">Querying Big Endian-Encoded Data</h2>
-
-<p>Drill optimizes scans of HBase tables when you use the <a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a> on big endian-encoded data. Drill provides the *_BE encoded types for use with CONVERT_TO and CONVERT_FROM to take advantage of these optimizations. Here are a few examples of the *_BE types.</p>
+        <p>This section covers the following topics:</p>
 
 <ul>
-<li>DATE_EPOCH_BE<br></li>
-<li>TIME_EPOCH_BE<br></li>
-<li>TIMESTAMP_EPOCH_BE<br></li>
-<li>UINT8_BE<br></li>
-<li>BIGINT_BE<br></li>
+<li><a href="/docs/querying-hbase/#tutorial-querying-hbase-data">Tutorial--Querying HBase Data</a><br>
+A simple tutorial that shows how to use Drill to query HBase data.<br></li>
+<li><a href="/docs/querying-hbase/#working-with-hbase-byte-arrays">Working with HBase Byte Arrays</a><br>
+How to work with HBase byte arrays for serious applications<br></li>
+<li><a href="/docs/querying-hbase/#querying-big-endian-encoded-data">Querying Big Endian-Encoded Data</a><br>
+How to use optimization features in Drill 1.2 and later<br></li>
+<li><a href="/docs/querying-hbase/#leveraging-hbase-ordered-byte-encoding">Leveraging HBase Ordered Byte Encoding</a><br>
+How to use Drill 1.2 to leverage new features introduced by <a href="https://issues.apache.org/jira/browse/HBASE-8201">HBASE-8201 Jira</a></li>
 </ul>
 
-<p>For example, Drill returns results performantly when you use the following query on big endian-encoded data:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT
-  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) d,
-  CONVERT_FROM(BYTE_SUBSTR(row_key, 9, 8), &#39;BIGINT_BE&#39;) id,
-  CONVERT_FROM(tableName.f.c, &#39;UTF8&#39;) 
-FROM hbase.`TestTableCompositeDate` tableName
-WHERE
-  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) &lt; DATE &#39;2015-06-18&#39; AND
-  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) &gt; DATE &#39;2015-06-13&#39;;
-</code></pre></div>
-<p>This query assumes that the row key of the table represents the DATE_EPOCH type encoded in big-endian format. The Drill HBase plugin will be able to prune the scan range since there is a condition on the big endian-encoded prefix of the row key. For more examples, see the <a href="https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java">test code</a>.</p>
+<h2 id="tutorial--querying-hbase-data">Tutorial--Querying HBase Data</h2>
 
-<p>To query HBase data:</p>
+<p>This tutorial shows how to connect Drill to an HBase data source, create simple HBase tables, and query the data using Drill.</p>
 
-<ol>
-<li>Connect the data source to Drill using the <a href="/docs/hbase-storage-plugin/">HBase storage plugin</a>.<br></li>
-<li>Determine the encoding of the HBase data you want to query. Ask the person in charge of creating the data.<br></li>
-<li><p>Based on the encoding type of the data, use the <a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a> to convert HBase binary representations to an SQL type as you query the data.<br>
-For example, use CONVERT_FROM in your Drill query to convert a big endian-encoded row key to an SQL BIGINT type:  </p>
+<hr>
 
-<p><code>SELECT CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8),&#39;BIGINT_BE’) FROM my_hbase_table;</code></p></li>
-</ol>
+<h3 id="configure-the-hbase-storage-plugin">Configure the HBase Storage Plugin</h3>
 
-<p>The <a href="/docs/string-manipulation/#byte_substr">BYTE_SUBSTR function</a> separates parts of a HBase composite key in this example. The Drill optimization is based on the capability in Drill 1.2 and later to push conditional filters down to the storage layer when HBase data is in big endian format. </p>
+<p>To query an HBase data source using Drill, first configure the <a href="/docs/hbase-storage-plugin/">HBase storage plugin</a> for your environment. </p>
 
-<p>Drill can performantly query HBase data that uses composite keys, as shown in the last example, if only the first component of the composite is encoded in big endian format. If the HBase row key is not stored in big endian, do not use the *_BE types. If you want to convert a little endian byte array to integer, use BIGINT instead of BIGINT_BE, for example, as an argument to CONVERT_FROM. </p>
-
-<h2 id="leveraging-hbase-ordered-byte-encoding">Leveraging HBase Ordered Byte Encoding</h2>
-
-<p>Drill 1.2 leverages new features introduced by <a href="https://issues.apache.org/jira/browse/HBASE-8201">HBASE-8201 Jira</a> that allows ordered byte encoding of different data types. This encoding scheme preserves the sort order of the native data type when the data is stored as sorted byte arrays on disk. Thus, Drill will be able to process data through the HBase storage plugin if the row keys have been encoded in OrderedBytes format.</p>
-
-<p>To execute the following query, Drill prunes the scan range to only include the row keys representing [-32,59] range, thus reducing the amount of data read.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT
- CONVERT_FROM(t.row_key, &#39;INT_OB&#39;) rk,
- CONVERT_FROM(t.`f`.`c`, &#39;UTF8&#39;) val
-FROM
-  hbase.`TestTableIntOB` t
-WHERE
-  CONVERT_FROM(row_key, &#39;INT_OB&#39;) &gt;= cast(-32 as INT) AND
-  CONVERT_FROM(row_key, &#39;INT_OB&#39;) &lt; cast(59 as INT);
-</code></pre></div>
-<p>For more examples, see the <a href="https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java">test code</a>.</p>
-
-<p>By taking advantage of ordered byte encoding, Drill 1.2 and later can performantly execute conditional queries without a secondary index on HBase big endian data. </p>
-
-<h2 id="querying-little-endian-encoded-data">Querying Little Endian-Encoded Data</h2>
-
-<p>As mentioned earlier, HBase stores data in little endian by default and Drill assumes the data is encoded in little endian. This exercise involves working with data that is encoded in little endian. First, you create two tables in HBase, students and clicks, that you can query with Drill. You use the CONVERT_TO and CONVERT_FROM functions to convert binary text to/from typed data. You use the CAST function to convert the binary data to an INT in step 4 of <a href="/docs/querying-hbase/#query-hbase-tables">Query HBase Tables</a>. When converting an INT or BIGINT number, having a byte count in the destination/source that does not match the byte count of the number in the binary source/destination, use CAST.</p>
+<hr>
 
 <h3 id="create-the-hbase-tables">Create the HBase tables</h3>
 
-<p>To create the HBase tables and start Drill, complete the following
+<p>You create two tables in HBase, students and clicks, that you can query with Drill. You use the CONVERT_TO and CONVERT_FROM functions to convert binary text to/from typed data. You use the CAST function to convert the binary data to an INT in step 4 of <a href="/docs/querying-hbase/#query-hbase-tables">Query HBase Tables</a>. Use CAST when the byte count of an INT or BIGINT number in the binary source does not match the byte count in the destination.</p>
+
+<p>To create the HBase tables, complete the following
 steps:</p>
 
 <ol>
@@ -1120,7 +1026,7 @@ steps:</p>
 </code></pre></div></li>
 <li><p>Issue the following command to create a <code>testdata.txt</code> file:</p>
 
-<p>cat &gt; testdata.txt</p></li>
+<p><code>cat &gt; testdata.txt</code></p></li>
<li><p>Copy and paste the following <code>put</code> commands on the line below the <strong>cat</strong> command. Press Return, and then CTRL D to close the file.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">put &#39;students&#39;,&#39;student1&#39;,&#39;account:name&#39;,&#39;Alice&#39;
 put &#39;students&#39;,&#39;student1&#39;,&#39;address:street&#39;,&#39;123 Ballmer Av&#39;
@@ -1189,13 +1095,19 @@ put &#39;clicks&#39;,&#39;click9&#39;,&#39;iteminfo:quantity&#39;,&#39;10&#39;
 </code></pre></div></li>
 </ol>
 
+<hr>
+
 <h3 id="query-hbase-tables">Query HBase Tables</h3>
 
+<p><a href="/docs/installing-drill-in-embedded-mode/">Start Drill</a> and complete the following steps to query the HBase tables you created.</p>
+
 <ol>
-<li><p>Issue the following query to see the data in the students table:  </p>
+<li>Use the HBase storage plugin configuration.<br>
+<code>USE hbase;</code><br></li>
+<li><p>Issue the following query to see the data in the students table:<br>
+<code>SELECT * FROM students;</code>  </p>
 
-<p>SELECT * FROM students;
-The query returns results that are not useable. In the next step, you convert the data.</p>
+<p>The query returns results that are not usable. In the next step, you convert the data from byte arrays to meaningful UTF8 types.</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">+-------------+-----------------------+---------------------------------------------------------------------------+
 |  row_key    |  account              |                                address                                    |
 +-------------+-----------------------+---------------------------------------------------------------------------+
@@ -1261,6 +1173,127 @@ FROM clicks tbl WHERE tbl.iteminfo.quantity=100;
 </code></pre></div></li>
 </ol>
 
+<h2 id="working-with-hbase-byte-arrays">Working with HBase Byte Arrays</h2>
+
+<p>The trivial example in the previous section queried little endian-encoded data in HBase. For serious applications, you need to understand how to work with HBase byte arrays. If you want Drill to interpret the underlying HBase row key as something other than a byte array, you need to know the encoding of the data in HBase. By default, HBase stores data in little endian, and Drill assumes the data is little endian; little endian byte order does not preserve the numeric sort order of the underlying values. The following table shows the sorting of typical row key IDs in bytes, encoded in little endian and big endian, respectively:</p>
+
+<table><thead>
+<tr>
+<th>IDs in Byte Notation Little Endian Sorting</th>
+<th>IDs in Decimal Notation</th>
+<th>IDs in Byte Notation Big Endian Sorting</th>
+<th>IDs in Decimal Notation</th>
+</tr>
+</thead><tbody>
+<tr>
+<td>0 x 010000 . . . 000</td>
+<td>1</td>
+<td>0 x 00000001</td>
+<td>1</td>
+</tr>
+<tr>
+<td>0 x 010100 . . . 000</td>
+<td>17</td>
+<td>0 x 00000002</td>
+<td>2</td>
+</tr>
+<tr>
+<td>0 x 020000 . . . 000</td>
+<td>2</td>
+<td>0 x 00000003</td>
+<td>3</td>
+</tr>
+<tr>
+<td>. . .</td>
+<td></td>
+<td>0 x 00000004</td>
+<td>4</td>
+</tr>
+<tr>
+<td>0 x 050000 . . . 000</td>
+<td>5</td>
+<td>0 x 00000005</td>
+<td>5</td>
+</tr>
+<tr>
+<td>. . .</td>
+<td></td>
+<td>. . .</td>
+<td></td>
+</tr>
+<tr>
+<td>0 x 0A0000 . . . 000</td>
+<td>10</td>
+<td>0 x 0000000A</td>
+<td>10</td>
+</tr>
+<tr>
+<td></td>
+<td></td>
+<td>0 x 00000101</td>
+<td>17</td>
+</tr>
+</tbody></table>
+
+<h2 id="querying-big-endian-encoded-data">Querying Big Endian-Encoded Data</h2>
+
+<p>Drill optimizes scans of HBase tables when you use the <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a> on big endian-encoded data. Drill provides the *_BE encoded types for use with CONVERT_TO and CONVERT_FROM to take advantage of these optimizations. Here are a few examples of the *_BE types.</p>
+
+<ul>
+<li>DATE_EPOCH_BE<br></li>
+<li>TIME_EPOCH_BE<br></li>
+<li>TIMESTAMP_EPOCH_BE<br></li>
+<li>UINT8_BE<br></li>
+<li>BIGINT_BE<br></li>
+</ul>
+
+<p>For example, Drill returns results performantly when you use the following query on big endian-encoded data:</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) d,
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 9, 8), &#39;BIGINT_BE&#39;) id,
+  CONVERT_FROM(tableName.f.c, &#39;UTF8&#39;) 
+FROM hbase.`TestTableCompositeDate` tableName
+WHERE
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) &lt; DATE &#39;2015-06-18&#39; AND
+  CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;DATE_EPOCH_BE&#39;) &gt; DATE &#39;2015-06-13&#39;;
+</code></pre></div>
+<p>This query assumes that the row key of the table represents the DATE_EPOCH type encoded in big-endian format. The Drill HBase plugin will be able to prune the scan range since there is a condition on the big endian-encoded prefix of the row key. For more examples, see the <a href="https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java">test code</a>.</p>
+
+<p>To query HBase data:</p>
+
+<ol>
+<li><p>Connect the data source to Drill using the <a href="/docs/hbase-storage-plugin/">HBase storage plugin</a>.  </p>
+
+<p><code>USE hbase;</code></p></li>
+<li><p>Determine the encoding of the HBase data you want to query. Ask the person in charge of creating the data.  </p></li>
+<li><p>Based on the encoding type of the data, use the <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a> to convert HBase binary representations to an SQL type as you query the data.<br>
+For example, use CONVERT_FROM in your Drill query to convert a big endian-encoded row key to an SQL BIGINT type:  </p>
+
+<p><code>SELECT CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), &#39;BIGINT_BE&#39;) FROM my_hbase_table;</code></p></li>
+</ol>
+
+<p>The <a href="/docs/string-manipulation/#byte_substr">BYTE_SUBSTR function</a> separates parts of a HBase composite key in this example. The Drill optimization is based on the capability in Drill 1.2 and later to push conditional filters down to the storage layer when HBase data is in big endian format. </p>
+
+<p>Drill can performantly query HBase data that uses composite keys, as shown in the last example, as long as the first component of the composite is encoded in big endian format. If the HBase row key is not stored in big endian, do not use the *_BE types. If you want to convert a little endian byte array to an integer, use BIGINT instead of BIGINT_BE, for example, as an argument to CONVERT_FROM. </p>
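
For comparison, a sketch of the little endian counterpart (the table name is hypothetical):

    SELECT CONVERT_FROM(BYTE_SUBSTR(row_key, 1, 8), 'BIGINT') id
    FROM hbase.`my_little_endian_table`;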
+
+<h2 id="leveraging-hbase-ordered-byte-encoding">Leveraging HBase Ordered Byte Encoding</h2>
+
+<p>Drill 1.2 leverages new features introduced by the <a href="https://issues.apache.org/jira/browse/HBASE-8201">HBASE-8201 Jira</a> that allow ordered byte encoding of different data types. This encoding scheme preserves the sort order of the native data type when the data is stored as sorted byte arrays on disk. Thus, Drill can process data through the HBase storage plugin if the row keys have been encoded in OrderedBytes format.</p>
+
+<p>To execute the following query, Drill prunes the scan range to only include the row keys representing [-32,59] range, thus reducing the amount of data read.</p>
+<div class="highlight"><pre><code class="language-text" data-lang="text">SELECT
+ CONVERT_FROM(t.row_key, &#39;INT_OB&#39;) rk,
+ CONVERT_FROM(t.`f`.`c`, &#39;UTF8&#39;) val
+FROM
+  hbase.`TestTableIntOB` t
+WHERE
+  CONVERT_FROM(row_key, &#39;INT_OB&#39;) &gt;= cast(-32 as INT) AND
+  CONVERT_FROM(row_key, &#39;INT_OB&#39;) &lt; cast(59 as INT);
+</code></pre></div>
+<p>For more examples, see the <a href="https://github.com/apache/drill/blob/95623912ebf348962fe8a8846c5f47c5fdcf2f78/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/TestHBaseFilterPushDown.java">test code</a>.</p>
+
+<p>By taking advantage of ordered byte encoding, Drill 1.2 and later can performantly execute conditional queries without a secondary index on HBase big endian data. </p>
+
     
       
         <div class="doc-nav">

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/querying-hive/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-hive/index.html b/docs/querying-hive/index.html
index 6a40f7e..3257d21 100644
--- a/docs/querying-hive/index.html
+++ b/docs/querying-hive/index.html
@@ -995,8 +995,8 @@ download the <a href="http://doc.mapr.com/download/attachments/28868943/customer
 
 <ol>
 <li><p>Issue the following command to start the Hive shell:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">hive
-</code></pre></div></li>
+
+<p><code>hive</code></p></li>
<li><p>Issue the following command from the Hive shell to create a table schema:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">hive&gt; create table customers(FirstName string, LastName string, Company string, Address string, City string, County string, State string, Zip string, Phone string, Fax string, Email string, Web string) row format delimited fields terminated by &#39;,&#39; stored as textfile;
 </code></pre></div></li>
@@ -1028,6 +1028,12 @@ download the <a href="http://doc.mapr.com/download/attachments/28868943/customer
 </code></pre></div></li>
 </ol>
 
+<h2 id="optimizing-reads-of-parquet-backed-tables">Optimizing Reads of Parquet-Backed Tables</h2>
+
+<p>Use the <code>store.hive.optimize_scan_with_native_readers</code> option to optimize reads of Parquet-backed external tables from Hive. When this option is set to TRUE, Drill uses its native readers instead of the Hive Serde interface, resulting in more performant queries of Parquet-backed external tables. (Drill 1.2 and later)</p>
+
+<p>Set the <code>store.hive.optimize_scan_with_native_readers</code> option as described in the section, <a href="/docs/planning-and-execution-options/">&quot;Planning and Execution Options&quot;</a>.</p>
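
For instance, at the session level (a sketch using the ALTER SESSION syntax shown elsewhere in these docs):

    ALTER SESSION SET `store.hive.optimize_scan_with_native_readers` = true;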
+
     
       
         <div class="doc-nav">

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/querying-parquet-files/index.html
----------------------------------------------------------------------
diff --git a/docs/querying-parquet-files/index.html b/docs/querying-parquet-files/index.html
index 4f1c92a..0aa6e57 100644
--- a/docs/querying-parquet-files/index.html
+++ b/docs/querying-parquet-files/index.html
@@ -990,8 +990,9 @@
     <div class="int_text" align="left">
       
         <p>Drill 1.2 and later extends SQL for performant querying of a large number, thousands or more, of Parquet files. By running the following command, you trigger the generation of metadata files in the directory of Parquet files and its subdirectories:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">REFRESH TABLE METADATA &lt;path to table&gt;
-</code></pre></div>
+
+<p><code>REFRESH TABLE METADATA &lt;path to table&gt;</code></p>
+
 <p>You need to run the command on a file or directory only once during the session. Subsequent queries return results quickly because Drill refers to the metadata saved in the cache, as described in <a href="/docs/parquet-format/#reading-parquet-files">Reading Parquet Files</a>. </p>
 
 <p>You can query nested directories from any level. For example, you can query a sub-sub-directory of Parquet files because Drill stores a metadata cache of information at each level that covers that particular level and all lower levels. </p>
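
For example (the workspace and table directory are hypothetical):

    REFRESH TABLE METADATA dfs.tmp.`parquet_data`;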

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/sql-extensions/index.html
----------------------------------------------------------------------
diff --git a/docs/sql-extensions/index.html b/docs/sql-extensions/index.html
index c6674c6..bd44ac3 100644
--- a/docs/sql-extensions/index.html
+++ b/docs/sql-extensions/index.html
@@ -987,12 +987,20 @@
 
     <div class="int_text" align="left">
       
-        <p>Drill extends SQL to work with Hadoop-scale data and to explore smaller-scale data in ways not possible with SQL. Using intuitive SQL extensions you work with self-describing data and complex data types. Extensions to SQL include capabilities for exploring self-describing data, such as files and HBase, directly in the native format.</p>
+        <p>Drill extends SQL to generate Parquet metadata, to work with Hadoop-scale data, and to explore smaller-scale data in ways not possible with SQL. Using intuitive SQL extensions, you work with self-describing data and complex data types. Extensions to SQL include capabilities for exploring self-describing data, such as files and HBase, directly in the native format.</p>
 
 <p>Drill provides language support for pointing to <a href="/docs/connect-a-data-source-introduction">storage plugin</a> interfaces that Drill uses to interact with data sources. Use the name of a storage plugin to specify a file system <em>database</em> as a prefix in queries when you refer to objects across databases. Query files, including compressed .gz files, and <a href="/docs/querying-directories">directories</a>, as you would query an SQL table. You can query multiple files in a directory.</p>
 
 <p>Drill extends the SELECT statement for reading complex, multi-structured data. The extended CREATE TABLE AS provides the capability to write data of complex/multi-structured data types. Drill extends the <a href="http://drill.apache.org/docs/lexical-structure">lexical rules</a> for working with files and directories, such as using back ticks for including file names, directory names, and reserved words in queries. Drill syntax supports using the file system as a persistent store for query profiles and diagnostic information.</p>
 
+<h2 id="extension-for-generating-parquet-metadata">Extension for Generating Parquet Metadata</h2>
+
+<p>To speed querying of Parquet files, you can <a href="/docs/querying-parquet-files/">generate metadata</a> in Drill 1.2 and later. Running the following command triggers the generation of metadata files in a directory of Parquet files and its subdirectories:</p>
+
+<p><code>REFRESH TABLE METADATA &lt;path to table&gt;</code></p>
+
+<p>Drill takes advantage of metadata, much as the Hive metastore does, and generates a <a href="/docs/parquet-format/#caching-metadata">metadata cache</a>. Using metadata can improve performance of queries on a large number of files. </p>
+
 <h2 id="extensions-for-hive--and-hbase-related-data-sources">Extensions for Hive- and HBase-related Data Sources</h2>
 
 <p>Drill supports Hive and HBase as a plug-and-play data source. Drill can read tables created in Hive that use <a href="/docs/hive-to-drill-data-type-mapping">data types compatible</a> with Drill.  You can query Hive tables without modifications. You can query self-describing data without requiring metadata definitions in the Hive metastore. Primitives, such as JOIN, support columnar operation. </p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/start-up-options/index.html
----------------------------------------------------------------------
diff --git a/docs/start-up-options/index.html b/docs/start-up-options/index.html
index 16fe5f9..32fb816 100644
--- a/docs/start-up-options/index.html
+++ b/docs/start-up-options/index.html
@@ -1031,6 +1031,8 @@ override.conf</code> file located in Drill’s<code>/conf</code> directory.</p>
 <p>The summary of start-up options, also known as boot options, lists default values. The following descriptions provide more detail on key options that are frequently reconfigured:</p>
 
 <ul>
+<li>drill.exec.http.ssl_enabled<br>
+Available in Drill 1.2. Enables or disables <a href="/docs/configuring-web-console-and-rest-api-security/#https-support">HTTPS support</a>. Settings are TRUE and FALSE, respectively. The default is FALSE.<br></li>
 <li>drill.exec.sys.store.provider.class<br>
 Defines the persistent storage (PStore) provider. The <a href="/docs/persistent-configuration-storage">PStore</a> holds configuration and profile data.<br></li>
 <li>drill.exec.buffer.size<br>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/starting-the-web-console/index.html
----------------------------------------------------------------------
diff --git a/docs/starting-the-web-console/index.html b/docs/starting-the-web-console/index.html
index 5fb1266..abc2ec2 100644
--- a/docs/starting-the-web-console/index.html
+++ b/docs/starting-the-web-console/index.html
@@ -987,13 +987,36 @@
 
     <div class="int_text" align="left">
       
-        <p>The Drill Web Console is one of several <a href="/docs/architecture-introduction/#drill-clients">client interfaces</a> you can use to access Drill.  To access the Web Console in Drill 1.2 and later, go to <code>https://&lt;IP address&gt;:8047</code>, where IP address is the host name or IP address of one of the installed Drillbits in a distributed system. In Drill 1.1 and earlier, go to <code>http://&lt;IP address&gt;:8047</code> to access the Web Console.</p>
+        <p>The Drill Web Console is one of several <a href="/docs/architecture-introduction/#drill-clients">client interfaces</a> you can use to access Drill. </p>
+
+<h2 id="drill-1.1-and-earlier">Drill 1.1 and Earlier</h2>
+
+<p>In Drill 1.1 and earlier, to open the Drill Web Console, launch a web browser, and go to the following URL:</p>
+
+<p><code>http://&lt;IP address or host name&gt;:8047</code> </p>
+
+<p>where IP address is the host name or IP address of one of the installed Drillbits in a distributed system or <code>localhost</code> in an embedded system.</p>
+
+<h2 id="drill-1.2-and-later">Drill 1.2 and Later</h2>
+
+<p>In Drill 1.2 and later, to open the Drill Web Console, launch a web browser, and go to one of the following URLs depending on the configuration of HTTPS support:</p>
+
+<ul>
+<li><code>http://&lt;IP address or host name&gt;:8047</code><br>
+Use this URL when <a href="/docs/configuring-web-console-and-rest-api-security/#https-support">HTTPS support</a> is disabled (the default).</li>
+<li><code>https://&lt;IP address or host name&gt;:8047</code><br>
+Use this URL when HTTPS support is enabled.</li>
+</ul>
+
+<p>If HTTPS support is enabled, you need <a href="/docs/configuring-web-console-and-rest-api-security/">authorization</a> to see and use the Storage tab of the Web Console. </p>
 
 <p>If <a href="/docs/configuring-user-authentication/">user authentication</a> is not enabled, the Web Console controls appear: </p>
 
 <p><img src="/docs/img/web-ui.png" alt="Web Console"></p>
 
-<p>If <a href="/docs/configuring-user-authentication/">user authentication</a> is enabled, Drill 1.2 and later prompts you for a user name and password:</p>
+<p>Select the Storage tab to view, update, or add a new <a href="/docs/plugin-configuration-basics/">storage plugin configuration</a>.</p>
+
+<p>If <a href="/docs/configuring-user-authentication/">user authentication</a> is enabled, Drill prompts you for a user name and password:</p>
 
 <p><img src="/docs/img/web-ui-login.png" alt="Web Console Login"></p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/storage-plugin-registration/index.html
----------------------------------------------------------------------
diff --git a/docs/storage-plugin-registration/index.html b/docs/storage-plugin-registration/index.html
index c56bc27..da8de6c 100644
--- a/docs/storage-plugin-registration/index.html
+++ b/docs/storage-plugin-registration/index.html
@@ -987,7 +987,7 @@
 
     <div class="int_text" align="left">
       
-        <p>You connect Drill to a file system, Hive, HBase, or other data source through a storage plugin. On the Storage tab of the Drill Web Console, you can view and reconfigure a storage plugin if you are <a href="/docs/configuring-web-console-and-rest-api-security/">authorized</a> to do so. Go to <code>https://&lt;IP address&gt;:8047/storage</code>, where IP address is the host name or IP address of one of the installed Drillbits in a distributed system or <code>localhost</code> in an embedded system. In Drill 1.1 and earlier, go to <code>http://&lt;IP address&gt;:8047/storage</code> to view and configure a storage plugin.</p>
+        <p>You connect Drill to a file system, Hive, HBase, or other data source through a storage plugin. On the Storage tab of the Drill Web Console, you can view and reconfigure a storage plugin. If <a href="/docs/configuring-web-console-and-rest-api-security/#https-support">HTTPS support</a> is not enabled (the default), go to <code>http://&lt;IP address&gt;:8047/storage</code> to view and configure a storage plugin. IP address is the host name or IP address of one of the installed Drillbits in a distributed system or <code>localhost</code> in an embedded system. If HTTPS support is enabled and you are <a href="/docs/configuring-web-console-and-rest-api-security/">authorized</a> to view and configure a storage plugin, go to <code>https://&lt;IP address&gt;:8047/storage</code>. </p>
 
 <p>The Drill installation registers the <code>cp</code>, <code>dfs</code>, <code>hbase</code>, <code>hive</code>, and <code>mongo</code> default storage plugin configurations.</p>
 

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/docs/supported-data-types/index.html
----------------------------------------------------------------------
diff --git a/docs/supported-data-types/index.html b/docs/supported-data-types/index.html
index 4b2c3eb..0f24a63 100644
--- a/docs/supported-data-types/index.html
+++ b/docs/supported-data-types/index.html
@@ -1130,7 +1130,7 @@ You do not assign a data type to every column name in a CREATE TABLE statement t
 <ul>
 <li><a href="/docs/data-type-conversion#cast">CAST</a><br></li>
 <li><a href="/docs/data-type-conversion#convert_to-and-convert_from">CONVERT TO/FROM</a><br>
-Use the <a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">CONVERT TO AND CONVERT FROM data types</a><br></li>
+Use the <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">CONVERT TO AND CONVERT FROM data types</a><br></li>
 <li>Other <a href="/docs/data-type-conversion#other-data-type-conversions">data conversion functions</a><br></li>
 </ul>
 
@@ -1246,7 +1246,7 @@ Implicitly casts all textual data to VARCHAR.</li>
 <li><a href="/docs/data-type-conversion#cast">CAST</a><br>
 Casts data from one data type to another.</li>
 <li>CONVERT_TO and CONVERT_FROM functions
-Converts data, including binary data, from one data type to another using <a href="/docs/supported-data-types/#convert_to-and-convert_from-data-types">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a><br></li>
+Converts data, including binary data, from one data type to another using <a href="/docs/supported-data-types/#data-types-for-convert_to-and-convert_from-functions">&quot;CONVERT_TO and CONVERT_FROM data types&quot;</a><br></li>
 <li><a href="/docs/data-type-conversion/#to_char">TO_CHAR</a><br>
 Converts a TIMESTAMP, INTERVALDAY/INTERVALYEAR, INTEGER, DOUBLE, or DECIMAL to a string.</li>
 <li><a href="/docs/data-type-conversion/#to_date">TO_DATE</a><br>
@@ -1520,7 +1520,7 @@ Converts a string to TIMESTAMP.</li>
 
 <p>* Used to cast binary UTF-8 data coming to/from sources such as HBase. The CAST function does not support all representations of FIXEDBINARY and VARBINARY. Only the UTF-8 format is supported. </p>
 
-<h2 id="convert_to-and-convert_from-data-types">CONVERT_TO and CONVERT_FROM Data Types</h2>
+<h2 id="data-types-for-convert_to-and-convert_from-functions">Data Types for CONVERT_TO and CONVERT_FROM Functions</h2>
 
 <p>The <a href="/docs/data-type-conversion/#convert_to-and-convert_from">CONVERT_TO function</a> converts data to bytes from the input type. The <a href="/docs/data-type-conversion/#convert_to-and-convert_from">CONVERT_FROM function</a> converts data from bytes to the input type. For example, the following CONVERT_TO function converts an integer to bytes using big endian encoding:</p>
 <div class="highlight"><pre><code class="language-text" data-lang="text">CONVERT_TO(mycolumn, &#39;INT_BE&#39;)
@@ -1631,6 +1631,11 @@ and CONVERT_FROM functions:</p>
 <td>DATE/TIME</td>
 </tr>
 <tr>
+<td>TIMESTAMP_IMPALA*</td>
+<td>bytes(8)</td>
+<td>INT96</td>
+</tr>
+<tr>
 <td>UTF8</td>
 <td>bytes</td>
 <td>VARCHAR</td>
@@ -1652,6 +1657,8 @@ and CONVERT_FROM functions:</p>
 </tr>
 </tbody></table>
 
+<p>* In Drill 1.2 and later, use the TIMESTAMP_IMPALA type with the CONVERT_FROM function to decode a timestamp from Hive or Impala, as shown in the section, <a href="/docs/parquet-format/#about-int96-support">&quot;About INT96 Support&quot;</a>.</p>
+
 <p>This table includes types such as INT, for converting little endian-encoded data and types such as INT_BE for converting big endian-encoded data to Drill internal types. You need to convert binary representations, such as data in HBase, to a Drill internal format as you query the data. If you are unsure that the size of the source and destination INT or BIGINT you are converting is the same, use CAST to convert these data types to/from binary.  </p>
 
 <p>*_HADOOPV in the data type name denotes the variable length integer as defined by Hadoop libraries. Use a *_HADOOPV type if user data is encoded in this format by a Hadoop tool outside MapR.</p>

http://git-wip-us.apache.org/repos/asf/drill-site/blob/72602b14/feed.xml
----------------------------------------------------------------------
diff --git a/feed.xml b/feed.xml
index 39e5a9d..b5c752a 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>/</link>
     <atom:link href="/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Wed, 23 Sep 2015 14:19:37 -0700</pubDate>
-    <lastBuildDate>Wed, 23 Sep 2015 14:19:37 -0700</lastBuildDate>
+    <pubDate>Wed, 30 Sep 2015 18:06:05 -0700</pubDate>
+    <lastBuildDate>Wed, 30 Sep 2015 18:06:05 -0700</lastBuildDate>
     <generator>Jekyll v2.5.2</generator>
     
       <item>

