hawq-commits mailing list archives

From yo...@apache.org
Subject [17/31] incubator-hawq-site git commit: rebuilt userguide html with latest changes from source release/2.1.0.0-incubating branch
Date Wed, 22 Feb 2017 22:15:12 GMT
http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/1b0cdd8e/docs/userguide/2.1.0.0-incubating/pxf/PXFExternalTableandAPIReference.html
----------------------------------------------------------------------
diff --git a/docs/userguide/2.1.0.0-incubating/pxf/PXFExternalTableandAPIReference.html b/docs/userguide/2.1.0.0-incubating/pxf/PXFExternalTableandAPIReference.html
index 5e48a91..5f254e1 100644
--- a/docs/userguide/2.1.0.0-incubating/pxf/PXFExternalTableandAPIReference.html
+++ b/docs/userguide/2.1.0.0-incubating/pxf/PXFExternalTableandAPIReference.html
@@ -170,6 +170,9 @@
           <li>
             <a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
           </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/logfiles.html">HAWQ Administrative Log Files</a>
+          </li>
         </ul>
       </li>
       <li class="has_submenu">
@@ -443,6 +446,7 @@
       </li>
       <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/HAWQBestPracticesOverview.html">Best Practices</a>
         <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/config_hawq_bestpractices.html">Configuring HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/operating_hawq_bestpractices.html">Operating HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/secure_bestpractices.html">Securing HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_resources_bestpractices.html">Managing Resources</a></li>
@@ -565,11 +569,17 @@
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval">gp_filerep_tcp_keepalives_interval</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket">gp_hashjoin_tuples_per_bucket</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_idf_deduplicate">gp_idf_deduplicate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_cache_future_packets">gp_interconnect_cache_future_packets</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_default_rtt">gp_interconnect_default_rtt</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_fc_method">gp_interconnect_fc_method</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier">gp_interconnect_hash_multiplier</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_retries_before_timeout">gp_interconnect_min_retries_before_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_rto">gp_interconnect_min_rto</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_queue_depth">gp_interconnect_queue_depth</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout">gp_interconnect_setup_timeout</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth">gp_interconnect_snd_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_checking_period">gp_interconnect_timer_checking_period</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_period">gp_interconnect_timer_period</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_type">gp_interconnect_type</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_log_format">gp_log_format</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_csv_line_length">gp_max_csv_line_length</a></li>
@@ -969,7 +979,6 @@
 <li><a href="#fragmenter">Fragmenter</a></li>
 <li><a href="#accessor">Accessor</a></li>
 <li><a href="#resolver">Resolver</a></li>
-<li><a href="#analyzer">Analyzer</a></li>
 </ul>
 </li>
 <li><a href="#aboutcustomprofiles">About Custom Profiles</a></li>
@@ -993,26 +1002,45 @@
         <div class="to-top" id="js-to-top">
           <a href="#top" title="back to top"></a>
         </div>
-        <p>You can use the PXF API to create your own connectors to access any other type of parallel data store or processing engine.</p>
+        <!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
 
-<p>The PXF Java API lets you extend PXF functionality and add new services and formats without changing HAWQ. The API includes three classes that are extended to allow HAWQ to access an external data source: Fragmenter, Accessor, and Resolver.</p>
+<p>You can use the PXF API to create your own connectors to access any other type of parallel data store or processing engine.</p>
 
-<p>The Fragmenter produces a list of data fragments that can be read in parallel from the data source. The Accessor produces a list of records from a single fragment, and the Resolver both deserializes and serializes records.</p>
+<p>The PXF Java API lets you extend PXF functionality and add new services and formats without changing HAWQ. The API includes three classes that are extended to allow HAWQ to access an external data source: <code>Fragmenter</code>, <code>Accessor</code>, and <code>Resolver</code>.</p>
 
-<p>Together, the Fragmenter, Accessor, and Resolver classes implement a connector. PXF includes plug-ins for tables in HDFS, HBase, and Hive.</p>
+<p>The <code>Fragmenter</code> produces a list of data fragments that can be read in parallel from the data source. The <code>Accessor</code> produces a list of records from a single fragment, and the <code>Resolver</code> both deserializes and serializes records.</p>
+
+<p>Together, the <code>Fragmenter</code>, <code>Accessor</code>, and <code>Resolver</code> classes implement a connector. PXF includes plug-ins for HDFS and JSON files and tables in HBase and Hive.</p>
 
 <h2><a id="creatinganexternaltable"></a>Creating an External Table</h2>
 
-<p>The syntax for a readable <code>EXTERNAL TABLE</code> that uses the PXF protocol is as follows:</p>
-<pre class="highlight sql"><code><span class="k">CREATE</span> <span class="p">[</span><span class="n">READABLE</span><span class="o">|</span><span class="n">WRITABLE</span><span class="p">]</span> <span class="k">EXTERNAL</span> <span class="k">TABLE</span> <span class="k">table_name</span>
-        <span class="p">(</span> <span class="k">column_name</span> <span class="n">data_type</span> <span class="p">[,</span> <span class="p">...]</span> <span class="o">|</span> <span class="k">LIKE</span> <span class="n">other_table</span> <span class="p">)</span>
-<span class="k">LOCATION</span><span class="p">(</span><span class="s1">'pxf://host[:port]/path-to-data&lt;pxf parameters&gt;[&amp;custom-option=value...]'</span><span class="p">)</span>
+<p>The syntax for an <code>EXTERNAL TABLE</code> that uses the PXF protocol is as follows:</p>
+<pre class="highlight sql"><code><span class="k">CREATE</span> <span class="p">[</span><span class="n">READABLE</span><span class="o">|</span><span class="n">WRITABLE</span><span class="p">]</span> <span class="k">EXTERNAL</span> <span class="k">TABLE</span> <span class="o">&lt;</span><span class="k">table_name</span><span class="o">&gt;</span>
+        <span class="p">(</span> <span class="o">&lt;</span><span class="k">column_name</span><span class="o">&gt;</span> <span class="o">&lt;</span><span class="n">data_type</span><span class="o">&gt;</span> <span class="p">[,</span> <span class="p">...]</span> <span class="o">|</span> <span class="k">LIKE</span> <span class="o">&lt;</span><span class="n">other_table</span><span class="o">&gt;</span> <span class="p">)</span>
+<span class="k">LOCATION</span><span class="p">(</span><span class="s1">'pxf://&lt;host&gt;[:&lt;port&gt;]/&lt;path-to-data&gt;?&lt;pxf-parameters&gt;[&amp;&lt;custom-option&gt;=&lt;value&gt;[...]]'</span><span class="p">)</span>
 <span class="n">FORMAT</span> <span class="s1">'custom'</span> <span class="p">(</span><span class="n">formatter</span><span class="o">=</span><span class="s1">'pxfwritable_import|pxfwritable_export'</span><span class="p">);</span>
 </code></pre>
 
-<p> where <em>&lt;pxf parameters&gt;</em> is:</p>
-<pre class="highlight plaintext"><code>   ?FRAGMENTER=fragmenter_class&amp;ACCESSOR=accessor_class&amp;RESOLVER=resolver_class]
- | ?PROFILE=profile-name
+<p>where &lt;pxf-parameters&gt; is:</p>
+<pre class="highlight plaintext"><code>    [FRAGMENTER=&lt;fragmenter_class&gt;&amp;ACCESSOR=&lt;accessor_class&gt;
+         &amp;RESOLVER=&lt;resolver_class&gt;] | PROFILE=&lt;profile-name&gt;
 </code></pre>
 
 <p><caption><span class="tablecap">Table 1. Parameter values and description</span></caption></p>
@@ -1027,47 +1055,63 @@
 </thead><tbody>
 <tr>
 <td>host</td>
-<td>The current host of the PXF service.</td>
+<td>The HDFS NameNode.</td>
 </tr>
 <tr>
 <td>port </td>
-<td>Connection port for the PXF service. If the port is omitted, PXF assumes that High Availability (HA) is enabled and connects to the HA name service port, 51200 by default. The HA name service port can be changed by setting the <code>pxf_service_port</code> configuration parameter.</td>
+<td>Connection port for the PXF service. If the port is omitted, PXF assumes that High Availability (HA) is enabled and connects to the HA name service port, 51200, by default. The HA name service port can be changed by setting the <code>pxf_service_port</code> configuration parameter.</td>
 </tr>
 <tr>
-<td><em>path_to_data</em></td>
+<td>&lt;path-to-data&gt;</td>
 <td>A directory, file name, wildcard pattern, table name, etc.</td>
 </tr>
 <tr>
+<td>PROFILE</td>
+<td>The profile PXF uses to access the data. PXF supports multiple plug-ins that currently expose profiles named <code>HBase</code>, <code>Hive</code>, <code>HiveRC</code>, <code>HiveText</code>, <code>HiveORC</code>,  <code>HdfsTextSimple</code>, <code>HdfsTextMulti</code>, <code>Avro</code>, <code>SequenceWritable</code>, and <code>Json</code>.</td>
+</tr>
+<tr>
 <td>FRAGMENTER</td>
-<td>The plug-in (Java class) to use for fragmenting data. Used for READABLE external tables only.</td>
+<td>The Java class the plug-in uses for fragmenting data. Used for READABLE external tables only.</td>
 </tr>
 <tr>
 <td>ACCESSOR</td>
-<td>The plug-in (Java class) to use for accessing the data. Used for READABLE and WRITABLE tables.</td>
+<td>The Java class the plug-in uses for accessing the data. Used for READABLE and WRITABLE tables.</td>
 </tr>
 <tr>
 <td>RESOLVER</td>
-<td>The plug-in (Java class) to use for serializing and deserializing the data. Used for READABLE and WRITABLE tables.</td>
+<td>The Java class the plug-in uses for serializing and deserializing the data. Used for READABLE and WRITABLE tables.</td>
 </tr>
 <tr>
-<td><em>custom-option</em>=<em>value</em></td>
-<td>Additional values to pass to the plug-in class. The parameters are passed at runtime to the plug-ins indicated above. The plug-ins can lookup custom options with <code>org.apache.hawq.pxf.api.utilities.InputData</code>. </td>
+<td>&lt;custom-option&gt;</td>
+<td>Additional values to pass to the plug-in at runtime. A plug-in can parse custom options with the PXF helper class  <code>org.apache.hawq.pxf.api.utilities.InputData</code>. </td>
 </tr>
 </tbody></table>
 
 <p><strong>Note:</strong> When creating PXF external tables, you cannot use the <code>HEADER</code> option in your <code>FORMAT</code> specification.</p>
 
-<p>For more information about this example, see <a href="#aboutthejavaclassservicesandformats">About the Java Class Services and Formats</a>.</p>
-
 <h2><a id="aboutthejavaclassservicesandformats"></a>About the Java Class Services and Formats</h2>
 
-<p>The <code>LOCATION</code> string in a PXF <code>CREATE EXTERNAL TABLE</code> statement is a URI that specifies the host and port of an external data source and the path to the data in the external data source. The query portion of the URI, introduced by the question mark (?), must include the required parameters <code>FRAGMENTER</code> (readable tables only), <code>ACCESSOR</code>, and <code>RESOLVER</code>, which specify Java class names that extend the base PXF API plug-in classes. Alternatively, the required parameters can be replaced with a <code>PROFILE</code> parameter with the name of a profile defined in the <code>/etc/conf/pxf-profiles.xml</code> that defines the required classes.</p>
+<p>The <code>LOCATION</code> string in a PXF <code>CREATE EXTERNAL TABLE</code> statement is a URI that specifies the host and port of an external data source and the path to the data in the external data source. The query portion of the URI, introduced by the question mark (?), must include the PXF profile name or the plug-in&rsquo;s  <code>FRAGMENTER</code> (readable tables only), <code>ACCESSOR</code>, and <code>RESOLVER</code> class names.</p>
+
+<p>PXF profiles are defined in the <code>/etc/pxf/conf/pxf-profiles.xml</code> file. Profile definitions include plug-in class names. For example, the <code>HdfsTextSimple</code> profile definition is:</p>
+<pre class="highlight xml"><code><span class="nt">&lt;profile&gt;</span>
+    <span class="nt">&lt;name&gt;</span>HdfsTextSimple<span class="nt">&lt;/name&gt;</span>
+    <span class="nt">&lt;description&gt;</span> This profile is suitable for use when reading delimited
+      single line records from plain text files on HDFS.
+    <span class="nt">&lt;/description&gt;</span>
+    <span class="nt">&lt;plugins&gt;</span>
+        <span class="nt">&lt;fragmenter&gt;</span>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter<span class="nt">&lt;/fragmenter&gt;</span>
+        <span class="nt">&lt;accessor&gt;</span>org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor<span class="nt">&lt;/accessor&gt;</span>
+        <span class="nt">&lt;resolver&gt;</span>org.apache.hawq.pxf.plugins.hdfs.StringPassResolver<span class="nt">&lt;/resolver&gt;</span>
+    <span class="nt">&lt;/plugins&gt;</span>
+<span class="nt">&lt;/profile&gt;</span>
+</code></pre>
 
-<p>The parameters in the PXF URI are passed from HAWQ as headers to the PXF Java service. You can pass custom information to user-implemented PXF plug-ins by adding optional parameters to the LOCATION string.</p>
+<p>The parameters in the PXF URI are passed from HAWQ as headers to the PXF Java service. You can pass custom information to user-implemented PXF plug-ins by adding optional parameters to the <code>LOCATION</code> string.</p>
 
 <p>The Java PXF service retrieves the source data from the external data source and converts it to a HAWQ-readable table format.</p>
 
-<p>The Accessor, Resolver, and Fragmenter Java classes extend the <code>org.apache.hawq.pxf.api.utilities.Plugin</code> class:</p>
+<p>The <code>Accessor</code>, <code>Resolver</code>, and <code>Fragmenter</code> Java classes extend the <code>org.apache.hawq.pxf.api.utilities.Plugin</code> class:</p>
 <pre class="highlight java"><code><span class="kn">package</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hawq</span><span class="o">.</span><span class="na">pxf</span><span class="o">.</span><span class="na">api</span><span class="o">.</span><span class="na">utilities</span><span class="o">;</span>
 <span class="cm">/**
  * Base class for all plug-in types (Accessor, Resolver, Fragmenter, ...).
@@ -1094,7 +1138,7 @@
 <span class="o">}</span>
 </code></pre>
 
-<p>The parameters in the <code>LOCATION</code> string are available to the plug-ins through methods in the <code>org.apache.hawq.pxf.api.utilities.InputData</code> class. Custom parameters added to the location string can be looked up with the <code>getUserProperty()</code> method.</p>
+<p>The parameters in the <code>LOCATION</code> string are available to the plug-ins through methods in the <code>org.apache.hawq.pxf.api.utilities.InputData</code> class. Plug-ins can look up the custom parameters added to the location string with the <code>getUserProperty()</code> method.</p>
 <pre class="highlight java"><code><span class="cm">/**
  * Common configuration available to all PXF plug-ins. Represents input data
  * coming from client applications, such as HAWQ.
@@ -1223,27 +1267,36 @@
 
 <h3><a id="fragmenter"></a>Fragmenter</h3>
 
-<p><strong>Note:</strong> The Fragmenter Plugin reads data into HAWQ readable external tables. The Fragmenter Plugin cannot write data out of HAWQ into writable external tables.</p>
+<p><strong>Note:</strong> You use the <code>Fragmenter</code> class to read data into HAWQ. You cannot use this class to write data out of HAWQ.</p>
 
-<p>The Fragmenter is responsible for passing datasource metadata back to HAWQ. It also returns a list of data fragments to the Accessor or Resolver. Each data fragment describes some part of the requested data set. It contains the datasource name, such as the file or table name, including the hostname where it is located. For example, if the source is a HDFS file, the Fragmenter returns a list of data fragments containing a HDFS file block. Each fragment includes the location of the block. If the source data is an HBase table, the Fragmenter returns information about table regions, including their locations.</p>
+<p>The <code>Fragmenter</code> is responsible for passing datasource metadata back to HAWQ. It also returns a list of data fragments to the <code>Accessor</code> or <code>Resolver</code>. Each data fragment describes some part of the requested data set. It contains the datasource name, such as the file or table name, including the hostname where it is located. For example, if the source is an HDFS file, the <code>Fragmenter</code> returns a list of data fragments containing an HDFS file block. Each fragment includes the location of the block. If the source data is an HBase table, the <code>Fragmenter</code> returns information about table regions, including their locations.</p>
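+
+<p>To make the flow concrete, here is a minimal, hypothetical <code>Fragmenter</code> sketch (the class name is illustrative, not a built-in plug-in). It reports a single fragment that covers the whole data source and is located on the local host:</p>
+<pre class="highlight java"><code>import org.apache.hawq.pxf.api.Fragment;
+import org.apache.hawq.pxf.api.Fragmenter;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import java.util.List;
+
+// Hypothetical example; uses the protected fragments list and inputData
+// member inherited from the Fragmenter and Plugin base classes.
+public class SingleFragmentFragmenter extends Fragmenter {
+
+    public SingleFragmentFragmenter(InputData inputData) {
+        super(inputData);
+    }
+
+    // Returns one fragment covering the whole data source, located on this host.
+    @Override
+    public List&lt;Fragment&gt; getFragments() throws Exception {
+        String[] hosts = { java.net.InetAddress.getLocalHost().getHostName() };
+        fragments.add(new Fragment(inputData.getDataSource(), hosts, "whole-source".getBytes()));
+        return fragments;
+    }
+}
+</code></pre>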
 
 <p>The <code>ANALYZE</code> command now retrieves advanced statistics for PXF readable tables by estimating the number of tuples in a table, creating a sample table from the external table, and running advanced statistics queries on the sample table in the same way statistics are collected for native HAWQ tables.</p>
 
 <p>The configuration parameter <code>pxf_enable_stat_collection</code> controls collection of advanced statistics. If <code>pxf_enable_stat_collection</code> is set to false, no analysis is performed on PXF tables. An additional parameter, <code>pxf_stat_max_fragments</code>, controls the number of fragments sampled to build a sample table. By default <code>pxf_stat_max_fragments</code> is set to 100, which means that even if there are more than 100 fragments, only this number of fragments will be used in <code>ANALYZE</code> to sample the data. Increasing this number will result in better sampling, but can also impact performance.</p>
 
-<p>When a PXF table is analyzed and <code>pxf_enable_stat_collection</code> is set to off, or an error occurs because the table is not defined correctly, the PXF service is down, or <code>getFragmentsStats</code> is not implemented, a warning message is shown and no statistics are gathered for that table. If <code>ANALYZE</code> is running over all tables in the database, the next table will be processed – a failure processing one table does not stop the command.</p>
+<p>When a PXF table is analyzed, any of the following conditions might result in a warning message with no statistics gathered for the table:</p>
 
-<p>For a detailed explanation about HAWQ statistical data gathering, see <code>ANALYZE</code> in the SQL Commands Reference.</p>
+<ul>
+<li><code>pxf_enable_stat_collection</code> is set to off,</li>
+<li>an error occurs because the table is not defined correctly,</li>
+<li>the PXF service is down, or</li>
+<li><code>getFragmentsStats()</code> is not implemented </li>
+</ul>
+
+<p>If <code>ANALYZE</code> is running over all tables in the database, the next table will be processed – a failure processing one table does not stop the command.</p>
+
+<p>For a detailed explanation about HAWQ statistical data gathering, refer to the <a href="/docs/userguide/2.1.0.0-incubating/reference/sql/ANALYZE.html"><code>ANALYZE</code></a> SQL command reference.</p>
 
 <p><strong>Note:</strong></p>
 
 <ul>
-<li>  Depending on external table size, the time required to complete an ANALYZE operation can be lengthy. The boolean parameter <code>pxf_enable_stat_collection</code> enables statistics collection for PXF. The default value is <code>on</code>. Turning this parameter off (disabling PXF statistics collection) can help decrease the time needed for the ANALYZE operation.</li>
-<li>  You can also use <em>pxf_stat_max_fragments</em> to limit the number of fragments to be sampled by decreasing it from the default (100). However, if the number is too low, the sample might not be uniform and the statistics might be skewed.</li>
-<li>  You can also implement getFragmentsStats to return an error. This will cause ANALYZE on a table with this Fragmenter to fail immediately, and default statistics values will be used for that table.</li>
+<li>  Depending on external table size, the time required to complete an <code>ANALYZE</code> operation can be lengthy. The boolean parameter <code>pxf_enable_stat_collection</code> enables statistics collection for PXF. The default value is <code>on</code>. Turning this parameter off (disabling PXF statistics collection) can help decrease the time needed for the <code>ANALYZE</code> operation.</li>
+<li>  You can also use <code>pxf_stat_max_fragments</code> to limit the number of fragments to be sampled by decreasing it from the default (100). However, if the number is too low, the sample might not be uniform and the statistics might be skewed.</li>
+<li>  You can also implement <code>getFragmentsStats()</code> to return an error. This will cause <code>ANALYZE</code> on a table with this <code>Fragmenter</code> to fail immediately, and default statistics values will be used for that table.</li>
 </ul>
 
-<p>The following table lists the Fragmenter plug-in implementations included with the PXF API.</p>
+<p>The following table lists the <code>Fragmenter</code> plug-in implementations included with the PXF API.</p>
 
 <p><a id="fragmenter__table_cgs_svp_3s"></a></p>
 
@@ -1255,31 +1308,31 @@
 </colgroup>
 <thead>
 <tr class="header">
-<th><p><code class="ph codeph">Fragmenter class</code></p></th>
-<th><p><code class="ph codeph">Description</code></p></th>
+<th><p>Fragmenter class</p></th>
+<th><p>Description</p></th>
 </tr>
 </thead>
 <tbody>
 <tr class="odd">
 <td>org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter</td>
-<td>Fragmenter for Hdfs files</td>
+<td>Fragmenter for HDFS and JSON files</td>
 </tr>
 <tr class="even">
-<td>org.apache.hawq.pxf.plugins.hbase.HBaseAtomicDataAccessor</td>
+<td>org.apache.hawq.pxf.plugins.hbase.HBaseDataFragmenter</td>
 <td>Fragmenter for HBase tables</td>
 </tr>
 <tr class="odd">
-<td>org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter</td>
+<td>org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter</td>
 <td>Fragmenter for Hive tables </td>
 </tr>
 <tr class="even">
 <td>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</td>
-<td>Fragmenter for Hive tables with RC or text files </td>
+<td>Fragmenter for Hive tables with RC, ORC, or text file formats </td>
 </tr>
 </tbody>
 </table>
 
-<p>A Fragmenter class extends <code>org.apache.hawq.pxf.api.Fragmenter</code>:</p>
+<p>A <code>Fragmenter</code> class extends <code>org.apache.hawq.pxf.api.Fragmenter</code>:</p>
 
 <h4><a id="com.pivotal.pxf.api.fragmenter"></a>org.apache.hawq.pxf.api.Fragmenter</h4>
 <pre class="highlight java"><code><span class="kn">package</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hawq</span><span class="o">.</span><span class="na">pxf</span><span class="o">.</span><span class="na">api</span><span class="o">;</span>
@@ -1330,7 +1383,7 @@
 
 <h4><a id="classdescription"></a>Class Description</h4>
 
-<p>The Fragmenter.getFragments() method returns a List&lt;Fragment&gt;;:</p>
+<p>The <code>Fragmenter.getFragments()</code> method returns a <code>List&lt;Fragment&gt;</code>:</p>
 <pre class="highlight java"><code><span class="kn">package</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hawq</span><span class="o">.</span><span class="na">pxf</span><span class="o">.</span><span class="na">api</span><span class="o">;</span>
 <span class="cm">/*
  * Fragment holds a data fragment' information.
@@ -1403,7 +1456,7 @@
 
 <h3><a id="accessor"></a>Accessor</h3>
 
-<p>The Accessor retrieves specific fragments and passes records back to the Resolver. For example, the HDFS plug-ins create a <code>org.apache.hadoop.mapred.FileInputFormat</code> and a <code>org.apache.hadoop.mapred.RecordReader</code> for an HDFS file and sends this to the Resolver. In the case of HBase or Hive files, the Accessor returns single rows from an HBase or Hive table. PXF 1.x or higher contains the following Accessor implementations:</p>
+<p>The <code>Accessor</code> retrieves specific fragments and passes records back to the <code>Resolver</code>. For example, the HDFS plug-ins create a <code>org.apache.hadoop.mapred.FileInputFormat</code> and a <code>org.apache.hadoop.mapred.RecordReader</code> for an HDFS file and send them to the <code>Resolver</code>. In the case of HBase or Hive files, the <code>Accessor</code> returns single rows from an HBase or Hive table. PXF includes the following <code>Accessor</code> implementations:</p>
 
 <p><a id="accessor__table_ewm_ttz_4p"></a></p>
 
@@ -1415,8 +1468,8 @@
 </colgroup>
 <thead>
 <tr class="header">
-<th><p><code class="ph codeph">Accessor class</code></p></th>
-<th><p><code class="ph codeph">Description</code></p></th>
+<th><p>Accessor class</p></th>
+<th><p>Description</p></th>
 </tr>
 </thead>
 <tbody>
@@ -1454,16 +1507,26 @@
 </tr>
 <tr class="odd">
 <td>org.apache.hawq.pxf.plugins.hive.HiveLineBreakAccessor</td>
-<td>Accessor for Hive tables with text files</td>
+<td>Accessor for Hive tables stored as text file format</td>
 </tr>
 <tr class="even">
 <td>org.apache.hawq.pxf.plugins.hive.HiveRCFileAccessor</td>
-<td>Accessor for Hive tables with RC files</td>
+<td>Accessor for Hive tables stored as RC file format</td>
+</tr>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hive.HiveORCAccessor</td>
+<td>Accessor for Hive tables stored as ORC file format</td>
+</tr>
+<tr class="even">
+<td>org.apache.hawq.pxf.plugins.json.JsonAccessor</td>
+<td>Accessor for JSON files</td>
 </tr>
 </tbody>
 </table>
 
-<p>The class must extend the <code>org.apache.hawq.pxf.Plugin</code>  class, and implement one or both interfaces:</p>
+<p>The class must extend the <code>org.apache.hawq.pxf.api.utilities.Plugin</code> class, and implement one or both of the following interfaces:</p>
 
 <ul>
 <li>  <code>org.apache.hawq.pxf.api.ReadAccessor</code></li>
@@ -1496,14 +1559,14 @@
 <span class="o">}</span>
 </code></pre>
 
-<p>The Accessor calls <code>openForRead()</code> to read existing data. After reading the data, it calls <code>closeForRead()</code>. <code>readNextObject()</code> returns one of the following:</p>
+<p>The <code>Accessor</code> calls <code>openForRead()</code> to read existing data. After reading the data, it calls <code>closeForRead()</code>. <code>readNextObject()</code> returns one of the following:</p>
 
 <ul>
-<li>  a single record, encapsulated in a OneRow object</li>
+<li>  a single record, encapsulated in a <code>OneRow</code> object</li>
 <li>  null if it reaches <code>EOF</code></li>
 </ul>
 
-<p>The Accessor calls <code>openForWrite()</code> to write data out. After writing the data, it writes a <code>OneRow</code> object with <code>writeNextObject()</code>, and when done calls <code>closeForWrite()</code>. <code>OneRow</code> represents a key-value item.</p>
+<p>The <code>Accessor</code> calls <code>openForWrite()</code> to write data out. It then writes each <code>OneRow</code> object with <code>writeNextObject()</code>, and when done calls <code>closeForWrite()</code>. <code>OneRow</code> represents a key-value item.</p>
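+
+<p>As a hedged illustration of the read flow (a hypothetical class, not a built-in plug-in), a minimal <code>ReadAccessor</code> might look like the following sketch:</p>
+<pre class="highlight java"><code>import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadAccessor;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+
+public class TwoRowAccessor extends Plugin implements ReadAccessor {
+    private int rowNumber = 0;
+
+    public TwoRowAccessor(InputData inputData) {
+        super(inputData);
+    }
+
+    @Override
+    public boolean openForRead() throws Exception {
+        return true; // nothing to open in this sketch
+    }
+
+    // Emits two synthetic rows, then null to signal EOF.
+    @Override
+    public OneRow readNextObject() throws Exception {
+        if (rowNumber &gt;= 2) {
+            return null;
+        }
+        // A custom option from the LOCATION string could be read here, e.g.:
+        // String prefix = inputData.getUserProperty("PREFIX"); // hypothetical option
+        return new OneRow(rowNumber, "row" + rowNumber++);
+    }
+
+    @Override
+    public void closeForRead() throws Exception {
+    }
+}
+</code></pre>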
 
 <h4><a id="com.pivotal.pxf.api.onerow"></a>org.apache.hawq.pxf.api.OneRow</h4>
 <pre class="highlight java"><code><span class="kn">package</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hawq</span><span class="o">.</span><span class="na">pxf</span><span class="o">.</span><span class="na">api</span><span class="o">;</span>
@@ -1553,7 +1616,7 @@
 
 <h3><a id="resolver"></a>Resolver</h3>
 
-<p>The Resolver deserializes records in the <code>OneRow</code> format and serializes them to a list of <code>OneField</code> objects. PXF converts a <code>OneField</code> object to a HAWQ-readable <code>GPDBWritable</code> format. PXF 1.x or higher contains the following implementations:</p>
+<p>The <code>Resolver</code> deserializes records in the <code>OneRow</code> format and serializes them to a list of <code>OneField</code> objects. PXF converts a <code>OneField</code> object to a HAWQ-readable <code>GPDBWritable</code> format. PXF includes the following <code>Resolver</code> implementations:</p>
 
 <p><a id="resolver__table_nbd_d5z_4p"></a></p>
 
@@ -1565,18 +1628,18 @@
 </colgroup>
 <thead>
 <tr class="header">
-<th><p><code class="ph codeph">Resolver class</code></p></th>
-<th><p><code class="ph codeph">Description</code></p></th>
+<th><p>Resolver class</p></th>
+<th><p>Description</p></th>
 </tr>
 </thead>
 <tbody>
 <tr class="odd">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</code></p></td>
+<td><p>org.apache.hawq.pxf.plugins.hdfs.StringPassResolver</p></td>
 <td><p><code class="ph codeph">StringPassResolver</code> replaced the deprecated <code class="ph codeph">TextResolver</code>. It passes whole records (composed of any data types) as strings without parsing them</p></td>
 </tr>
 <tr class="even">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.WritableResolver</code></p></td>
-<td><p>Resolver for custom Hadoop Writable implementations. Custom class can be specified with the schema in DATA-SCHEMA. Supports the following types:</p>
+<td><p>org.apache.hawq.pxf.plugins.hdfs.WritableResolver</p></td>
+<td><p>Resolver for custom Hadoop Writable implementations. The custom class can be specified with the schema in <code class="ph codeph">DATA-SCHEMA</code>. Supports the following types:</p>
 <pre class="pre codeblock"><code>DataType.BOOLEAN
 DataType.INTEGER
 DataType.BIGINT
@@ -1586,11 +1649,11 @@ DataType.VARCHAR
 DataType.BYTEA</code></pre></td>
 </tr>
 <tr class="odd">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hdfs.AvroResolver</code></p></td>
+<td><p>org.apache.hawq.pxf.plugins.hdfs.AvroResolver</p></td>
 <td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code>. </p></td>
 </tr>
 <tr class="even">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hbase.HBaseResolver</code></p></td>
+<td><p>org.apache.hawq.pxf.plugins.hbase.HBaseResolver</p></td>
 <td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code> and also supports the following:</p>
 <pre class="pre codeblock"><code>DataType.SMALLINT
 DataType.NUMERIC
@@ -1599,20 +1662,24 @@ DataType.BPCHAR
 DataType.TIMESTAMP</code></pre></td>
 </tr>
 <tr class="odd">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveResolver</code></p></td>
+<td><p>org.apache.hawq.pxf.plugins.hive.HiveResolver</p></td>
 <td><p>Supports the same field objects as <code class="ph codeph">WritableResolver</code> and also supports the following:</p>
 <pre class="pre codeblock"><code>DataType.SMALLINT
 DataType.TEXT
 DataType.TIMESTAMP</code></pre></td>
 </tr>
 <tr class="even">
-<td><p><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveStringPassResolver</code></p></td>
+<td><p>org.apache.hawq.pxf.plugins.hive.HiveStringPassResolver</p></td>
 <td>Specialized <code class="ph codeph">HiveResolver</code> for a Hive table stored as Text files. Should be used together with <code class="ph codeph">HiveInputFormatFragmenter</code>/<code class="ph codeph">HiveLineBreakAccessor</code>.</td>
 </tr>
 <tr class="odd">
-<td><code class="ph codeph">org.apache.hawq.pxf.plugins.hive.HiveColumnarSerdeResolver</code></td>
+<td>org.apache.hawq.pxf.plugins.hive.HiveColumnarSerdeResolver</td>
 <td>Specialized <code class="ph codeph">HiveResolver</code> for a Hive table stored as RC file. Should be used together with <code class="ph codeph">HiveInputFormatFragmenter</code>/<code class="ph codeph">HiveRCFileAccessor</code>.</td>
 </tr>
+<tr class="odd">
+<td>org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver</td>
+<td>Specialized <code class="ph codeph">HiveResolver</code> for a Hive table stored in ORC format. Should be used together with <code class="ph codeph">HiveInputFormatFragmenter</code>/<code class="ph codeph">HiveORCAccessor</code>.</td>
+</tr>
 </tbody>
 </table>
 
@@ -1647,8 +1714,8 @@ DataType.TIMESTAMP</code></pre></td>
 <p><strong>Note:</strong></p>
 
 <ul>
-<li>  getFields should return a List&lt;OneField&gt;, each OneField representing a single field.</li>
-<li>  <code>setFields </code>should return a single <code>OneRow </code>object, given a List&lt;OneField&gt;.</li>
+<li>  <code>getFields()</code> should return a <code>List&lt;OneField&gt;</code>, with each <code>OneField</code> representing a single field (a read-side sketch follows this list).</li>
+<li>  <code>setFields()</code> should return a single <code>OneRow</code> object, given a <code>List&lt;OneField&gt;</code>.</li>
 </ul>
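+
+<p>As a sketch of the read side (a hypothetical class name, assuming the <code>org.apache.hawq.pxf.api.ReadResolver</code> interface), a minimal <code>getFields()</code> implementation that maps a <code>OneRow</code> key-value pair to two fields might look like this:</p>
+<pre class="highlight java"><code>import org.apache.hawq.pxf.api.OneField;
+import org.apache.hawq.pxf.api.OneRow;
+import org.apache.hawq.pxf.api.ReadResolver;
+import org.apache.hawq.pxf.api.io.DataType;
+import org.apache.hawq.pxf.api.utilities.InputData;
+import org.apache.hawq.pxf.api.utilities.Plugin;
+import java.util.LinkedList;
+import java.util.List;
+
+// Hypothetical example resolver.
+public class KeyValueResolver extends Plugin implements ReadResolver {
+
+    public KeyValueResolver(InputData inputData) {
+        super(inputData);
+    }
+
+    // Deserializes one OneRow into two OneField objects:
+    // an integer key and a text value.
+    @Override
+    public List&lt;OneField&gt; getFields(OneRow row) throws Exception {
+        List&lt;OneField&gt; output = new LinkedList&lt;OneField&gt;();
+        output.add(new OneField(DataType.INTEGER.getOID(), row.getKey()));
+        output.add(new OneField(DataType.VARCHAR.getOID(), row.getData().toString()));
+        return output;
+    }
+}
+</code></pre>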
 
 <h4><a id="com.pivotal.pxf.api.onefield"></a>org.apache.hawq.pxf.api.OneField</h4>
@@ -1670,7 +1737,7 @@ DataType.TIMESTAMP</code></pre></td>
 <span class="o">}</span>
 </code></pre>
 
-<p>The value of <code>type</code> should follow the org.apache.hawq.pxf.api.io.DataType <code>enums</code>. <code>val</code> is the appropriate Java class. Supported types are as follows:</p>
+<p>The value of <code>type</code> should follow the <code>org.apache.hawq.pxf.api.io.DataType</code> enum. <code>val</code> is the appropriate Java class. Supported types are:</p>
 
 <p><a id="com.pivotal.pxf.api.onefield__table_f4x_35z_4p"></a></p>
 
@@ -1742,19 +1809,13 @@ DataType.TIMESTAMP</code></pre></td>
 </tbody>
 </table>
 
-<h3><a id="analyzer"></a>Analyzer</h3>
-
-<p>The Analyzer has been deprecated. A new function in the Fragmenter API (Fragmenter.getFragmentsStats) is used to gather initial statistics for the data source, and provides PXF statistical data for the HAWQ query optimizer. For a detailed explanation about HAWQ statistical data gathering, see <code>ANALYZE</code> in the SQL Command Reference.</p>
-
-<p>Using the Analyzer API will result in an error message. Use the Fragmenter and getFragmentsStats to gather advanced statistics.</p>
-
 <h2><a id="aboutcustomprofiles"></a>About Custom Profiles</h2>
 
-<p>Administrators can add new profiles or edit the built-in profiles in <code>/etc/conf/pxf-profiles.xml</code> file. See <a href="/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html#readingandwritingdatawithpxf">Using Profiles to Read and Write Data</a> for information on how to add custom profiles.</p>
+<p>Administrators can add new profiles or edit the built-in profiles in <code>/etc/pxf/conf/pxf-profiles.xml</code>. See <a href="/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html#readingandwritingdatawithpxf">Using Profiles to Read and Write Data</a> for information on how to add custom profiles.</p>
 
 <h2><a id="aboutqueryfilterpush-down"></a>About Query Filter Push-Down</h2>
 
-<p>If a query includes a number of WHERE clause filters,  HAWQ may push all or some queries to PXF. If pushed to PXF, the Accessor can use the filtering information when accessing the data source to fetch tuples. These filters only return records that pass filter evaluation conditions. This reduces data processing and reduces network traffic from the SQL engine.</p>
+<p>If a query includes a number of <code>WHERE</code> clause filters, HAWQ may push all or some of those filters down to PXF. If pushed to PXF, the <code>Accessor</code> can use the filtering information when accessing the data source to fetch tuples. The filters return only records that pass the filter evaluation conditions. This reduces data processing and reduces network traffic from the SQL engine.</p>
 
 <p>This topic includes the following information:</p>
 
@@ -1775,11 +1836,11 @@ DataType.TIMESTAMP</code></pre></td>
 <li>  Uses only expressions of supported data types and operators.</li>
 </ul>
 
-<p>FilterParser scans the pushed down filter list and uses the user&rsquo;s build() implementation to build the filter.</p>
+<p><code>FilterParser</code> scans the pushed-down filter list and uses the user&rsquo;s <code>build()</code> implementation to build the filter.</p>
 
 <ul>
-<li>  For simple expressions (e.g, a &gt;= 5), FilterParser places column objects on the left of the expression and constants on the right.</li>
-<li>  For compound expressions (e.g &lt;expression&gt; AND &lt;expression&gt;) it handles three cases in the build() function:
+<li>  For simple expressions (e.g., a &gt;= 5), <code>FilterParser</code> places column objects on the left of the expression and constants on the right.</li>
+<li>  For compound expressions (e.g., &lt;expression&gt; AND &lt;expression&gt;) it handles three cases in the <code>build()</code> function:
 
 <ol>
 <li> Simple Expression: &lt;Column Index&gt; &lt;Operation&gt; &lt;Constant&gt;</li>
@@ -1790,7 +1851,7 @@ DataType.TIMESTAMP</code></pre></td>
 
 <h3><a id="creatingafilterbuilderclass"></a>Creating a Filter Builder Class</h3>
 
-<p>To check if a filter queried PXF, call the <code>InputData                   hasFilter()</code> function:</p>
+<p>To check if a filter queried PXF, call the <code>InputData.hasFilter()</code> function:</p>
 <pre class="highlight java"><code><span class="cm">/*
  * Returns true if there is a filter string to parse
  */</span>
@@ -1800,7 +1861,7 @@ DataType.TIMESTAMP</code></pre></td>
 <span class="o">}</span>
 </code></pre>
 
-<p>If <code>hasFilter()</code> returns <code>false</code>, there is no filter information. If it returns <code>true</code>, PXF parses the serialized filter string into a meaningful filter object to use later. To do so, create a filter builder class that implements the <code>FilterParser.FilterBuilder </code> interface:</p>
+<p>If <code>hasFilter()</code> returns <code>false</code>, there is no filter information. If it returns <code>true</code>, PXF parses the serialized filter string into a meaningful filter object to use later. To do so, create a filter builder class that implements the <code>FilterParser.FilterBuilder</code> interface:</p>
 <pre class="highlight java"><code><span class="kn">package</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hawq</span><span class="o">.</span><span class="na">pxf</span><span class="o">.</span><span class="na">api</span><span class="o">;</span>
 <span class="cm">/*
  * Interface a user of FilterParser should implement
@@ -1815,7 +1876,7 @@ DataType.TIMESTAMP</code></pre></td>
 <span class="o">}</span>
 </code></pre>
 
-<p>While PXF parses the serialized filter string from the incoming HAWQ query, it calls the <code>build() interface</code> function. PXF calls this function for each condition or filter pushed down to PXF. Implementing this function returns some Filter object or representation that the Fragmenter, Accessor, or Resolver uses in runtime to filter out records. The <code>build()</code> function accepts an Operation as input, and left and right operands.</p>
+<p>As PXF parses the serialized filter string from the incoming HAWQ query, it calls the <code>build()</code> function. PXF calls this function for each condition or filter pushed down to PXF. Your implementation of this function returns a filter object or representation that the <code>Fragmenter</code>, <code>Accessor</code>, or <code>Resolver</code> uses at runtime to filter out records. The <code>build()</code> function accepts an operation as input, along with left and right operands.</p>
 
 <h3><a id="filteroperations"></a>Filter Operations</h3>
 <pre class="highlight java"><code><span class="cm">/*
@@ -1829,8 +1890,20 @@ DataType.TIMESTAMP</code></pre></td>
     <span class="n">HDOP_GE</span><span class="o">,</span> <span class="c1">//greater than or equal</span>
     <span class="n">HDOP_EQ</span><span class="o">,</span> <span class="c1">//equal</span>
     <span class="n">HDOP_NE</span><span class="o">,</span> <span class="c1">//not equal</span>
-    <span class="n">HDOP_AND</span> <span class="c1">//AND'ed conditions</span>
+    <span class="n">HDOP_LIKE</span><span class="o">,</span>
+    <span class="n">HDOP_IS_NULL</span><span class="o">,</span>
+    <span class="n">HDOP_IS_NOT_NULL</span><span class="o">,</span>
+    <span class="n">HDOP_IN</span>
 <span class="o">};</span>
+
+<span class="cm">/**
+ * Logical operators
+ */</span>
+<span class="kd">public</span> <span class="kd">enum</span> <span class="n">LogicalOperation</span> <span class="o">{</span>
+    <span class="n">HDOP_AND</span><span class="o">,</span>
+    <span class="n">HDOP_OR</span><span class="o">,</span>
+    <span class="n">HDOP_NOT</span>
+<span class="o">}</span>
 </code></pre>
 
 <h4><a id="filteroperands"></a>Filter Operands</h4>
@@ -1869,7 +1942,7 @@ DataType.TIMESTAMP</code></pre></td>
 
 <h4><a id="filterobject"></a>Filter Object</h4>
 
-<p>Filter Objects can be internal, such as those you define; or external, those that the remote system uses. For example, for HBase, you define the HBase <code>Filter</code> class (<code>org.apache.hadoop.hbase.filter.Filter</code>), while for Hive, you use an internal default representation created by the PXF framework, called <code>BasicFilter</code>. You can decide the filter object to use, including writing a new one. <code>BasicFilter</code> is the most common:</p>
+<p>Filter objects can be internal, such as those you define, or external, such as those the remote system uses. For example, for HBase you define the HBase <code>Filter</code> class (<code>org.apache.hadoop.hbase.filter.Filter</code>), while for Hive you use an internal default representation created by the PXF framework, called <code>BasicFilter</code>. You can choose the filter object to use, including writing a new one. <code>BasicFilter</code> is the most common:</p>
 <pre class="highlight java"><code><span class="cm">/*
  * Basic filter provided for cases where the target storage system does not provide its own filter
  * For example: Hbase storage provides its own filter but for a Writable based record in a SequenceFile
@@ -1901,7 +1974,7 @@ DataType.TIMESTAMP</code></pre></td>
 
 <h3><a id="sampleimplementation"></a>Sample Implementation</h3>
 
-<p>Let&rsquo;s look at the following sample implementation of the filter builder class and its <code>build()</code> function that handles all 3 cases. Let&rsquo;s assume that BasicFilter was used to hold our filter operations.</p>
+<p>Let&rsquo;s look at the following sample implementation of the filter builder class and its <code>build()</code> function, which handles all three cases. Let&rsquo;s assume that <code>BasicFilter</code> is used to hold our filter operations.</p>
 <pre class="highlight java"><code><span class="kn">import</span> <span class="nn">java.util.LinkedList</span><span class="o">;</span>
 <span class="kn">import</span> <span class="nn">java.util.List</span><span class="o">;</span>
 
@@ -1983,7 +2056,7 @@ DataType.TIMESTAMP</code></pre></td>
 <span class="o">}</span>
 </code></pre>
 
-<p>Here is an example of creating a filter-builder class to implement the Filter interface, implement the <code>build()</code> function, and generate the Filter object. To do this, use either the Accessor, Resolver, or both to call the <code>getFilterObject</code> function:</p>
+<p>Here is an example of creating a filter-builder class that implements the <code>FilterParser.FilterBuilder</code> interface, implements the <code>build()</code> function, and generates the filter object. The <code>Accessor</code>, the <code>Resolver</code>, or both can then call the <code>getFilterObject()</code> function:</p>
 <pre class="highlight java"><code><span class="k">if</span> <span class="o">(</span><span class="n">inputData</span><span class="o">.</span><span class="na">hasFilter</span><span class="o">())</span>
 <span class="o">{</span>
     <span class="n">String</span> <span class="n">filterStr</span> <span class="o">=</span> <span class="n">inputData</span><span class="o">.</span><span class="na">filterString</span><span class="o">();</span>
@@ -2012,7 +2085,7 @@ DataType.TIMESTAMP</code></pre></td>
 <span class="o">}</span>
 </code></pre>
 
-<p>Example of evaluating a single filter:</p>
+<p>Example showing evaluation of a single filter:</p>
 <pre class="highlight java"><code><span class="c1">//Get our BasicFilter Object</span>
 <span class="n">FilterParser</span><span class="o">.</span><span class="na">BasicFilter</span> <span class="n">bFilter</span> <span class="o">=</span> <span class="o">(</span><span class="n">FilterParser</span><span class="o">.</span><span class="na">BasicFilter</span><span class="o">)</span><span class="n">filter</span><span class="o">;</span>
 
@@ -2100,7 +2173,7 @@ DataType.TIMESTAMP</code></pre></td>
 
 <h3><a id="pluginexamples"></a>Plug-in Examples</h3>
 
-<p>This section contains sample dummy implementations of all three plug-ins. It also contains a usage example.</p>
+<p>This section contains sample dummy implementations of all three plug-ins. It also includes a usage example.</p>
 
 <h4><a id="dummyfragmenter"></a>Dummy Fragmenter</h4>
 <pre class="highlight java"><code><span class="kn">import</span> <span class="nn">org.apache.hawq.pxf.api.Fragmenter</span><span class="o">;</span>

http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/1b0cdd8e/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html
----------------------------------------------------------------------
diff --git a/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html b/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html
index b3834d9..393d53b 100644
--- a/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html
+++ b/docs/userguide/2.1.0.0-incubating/pxf/ReadWritePXF.html
@@ -170,6 +170,9 @@
           <li>
             <a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
           </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/logfiles.html">HAWQ Administrative Log Files</a>
+          </li>
         </ul>
       </li>
       <li class="has_submenu">
@@ -443,6 +446,7 @@
       </li>
       <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/HAWQBestPracticesOverview.html">Best Practices</a>
         <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/config_hawq_bestpractices.html">Configuring HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/operating_hawq_bestpractices.html">Operating HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/secure_bestpractices.html">Securing HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_resources_bestpractices.html">Managing Resources</a></li>
@@ -565,11 +569,17 @@
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval">gp_filerep_tcp_keepalives_interval</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket">gp_hashjoin_tuples_per_bucket</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_idf_deduplicate">gp_idf_deduplicate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_cache_future_packets">gp_interconnect_cache_future_packets</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_default_rtt">gp_interconnect_default_rtt</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_fc_method">gp_interconnect_fc_method</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier">gp_interconnect_hash_multiplier</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_retries_before_timeout">gp_interconnect_min_retries_before_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_rto">gp_interconnect_min_rto</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_queue_depth">gp_interconnect_queue_depth</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout">gp_interconnect_setup_timeout</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth">gp_interconnect_snd_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_checking_period">gp_interconnect_timer_checking_period</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_period">gp_interconnect_timer_period</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_type">gp_interconnect_type</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_log_format">gp_log_format</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_csv_line_length">gp_max_csv_line_length</a></li>
@@ -970,7 +980,26 @@
         <div class="to-top" id="js-to-top">
           <a href="#top" title="back to top"></a>
         </div>
-        <p>PXF profiles are collections of common metadata attributes that can be used to simplify the reading and writing of data. You can use any of the built-in profiles that come with PXF or you can create your own.</p>
+        <!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<p>PXF profiles are collections of common metadata attributes that can be used to simplify the reading and writing of data. You can use any of the built-in profiles that come with PXF or you can create your own.</p>
 
 <p>For example, if you are writing single line records to text files on HDFS, you could use the built-in HdfsTextSimple profile. You specify this profile when you create the PXF external table used to write the data to HDFS.</p>
 
@@ -985,7 +1014,7 @@
 <li>  JSON (Read only)</li>
 </ul>
 
-<p>You can specify a built-in profile when you want to read data that exists inside HDFS files, Hive tables, HBase tables, and JSON files and for writing data into HDFS files.</p>
+<p>You can specify a built-in profile when you want to read data that exists inside HDFS files, Hive tables, HBase tables, or JSON files, and when you want to write data into HDFS files.</p>
 
 <table>
 <colgroup>
@@ -997,7 +1026,7 @@
 <tr class="header">
 <th>Profile</th>
 <th>Description</th>
-<th>Fragmenter/Accessor/Resolver</th>
+<th>Fragmenter/Accessor/Resolver/Metadata/OutputFormat</th>
 </tr>
 </thead>
 <tbody>
@@ -1026,6 +1055,8 @@
 <li>org.apache.hawq.pxf.plugins.hive.HiveDataFragmenter</li>
 <li>org.apache.hawq.pxf.plugins.hive.HiveAccessor</li>
 <li>org.apache.hawq.pxf.plugins.hive.HiveResolver</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveMetadataFetcher</li>
+<li>org.apache.hawq.pxf.service.io.GPDBWritable</li>
 </ul></td>
 </tr>
 <tr class="even">
@@ -1038,6 +1069,20 @@ Note: The <code class="ph codeph">DELIMITER</code> parameter is mandatory.
 <li>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</li>
 <li>org.apache.hawq.pxf.plugins.hive.HiveRCFileAccessor</li>
 <li>org.apache.hawq.pxf.plugins.hive.HiveColumnarSerdeResolver</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveMetadataFetcher</li>
+<li>org.apache.hawq.pxf.service.io.Text</li>
+</ul></td>
+</tr>
+<tr class="odd">
+<td>HiveORC</td>
+<td>Optimized read of a Hive table where each partition is stored as an ORC file.
+</td>
+<td><ul>
+<li>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveORCAccessor</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveMetadataFetcher</li>
+<li>org.apache.hawq.pxf.service.io.GPDBWritable</li>
 </ul></td>
 </tr>
 <tr class="odd">
@@ -1050,6 +1095,8 @@ Note: The <code class="ph codeph">DELIMITER</code> parameter is mandatory.
 <li>org.apache.hawq.pxf.plugins.hive.HiveInputFormatFragmenter</li>
 <li>org.apache.hawq.pxf.plugins.hive.HiveLineBreakAccessor</li>
 <li>org.apache.hawq.pxf.plugins.hive.HiveStringPassResolver</li>
+<li>org.apache.hawq.pxf.plugins.hive.HiveMetadataFetcher</li>
+<li>org.apache.hawq.pxf.service.io.Text</li>
 </ul></td>
 </tr>
 <tr class="even">
@@ -1082,6 +1129,8 @@ Note: The <code class="ph codeph">DELIMITER</code> parameter is mandatory.
 </tbody>
 </table>
 
+<p><strong>Notes</strong>: Metadata identifies the Java class that provides field definitions in the relation. OutputFormat identifies the output serialization format (text or binary) for which a specific profile is optimized. While the built-in <code>Hive*</code> profiles provide Metadata and OutputFormat classes, other profiles may have no need to implement or specify these classes.</p>
+
 <h2><a id="addingandupdatingprofiles"></a>Adding and Updating Profiles</h2>
 
 <p>Each profile has a mandatory unique name and an optional description. In addition, each profile contains an extensible set of plug-in metadata attributes. Administrators can add new profiles or edit the built-in profiles defined in <code>/etc/pxf/conf/pxf-profiles.xml</code>.</p>
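+
+<p>A custom entry in <code>pxf-profiles.xml</code> might look like the following sketch; the profile name and plug-in class names are hypothetical, not classes shipped with PXF:</p>
+<pre class="highlight xml"><code>&lt;profiles&gt;
+    &lt;profile&gt;
+        &lt;name&gt;MyCustomProfile&lt;/name&gt;
+        &lt;description&gt;Reads records from a custom data source&lt;/description&gt;
+        &lt;plugins&gt;
+            &lt;fragmenter&gt;com.example.pxf.MyFragmenter&lt;/fragmenter&gt;
+            &lt;accessor&gt;com.example.pxf.MyAccessor&lt;/accessor&gt;
+            &lt;resolver&gt;com.example.pxf.MyResolver&lt;/resolver&gt;
+        &lt;/plugins&gt;
+    &lt;/profile&gt;
+&lt;/profiles&gt;
+</code></pre>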

http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/1b0cdd8e/docs/userguide/2.1.0.0-incubating/pxf/TroubleshootingPXF.html
----------------------------------------------------------------------
diff --git a/docs/userguide/2.1.0.0-incubating/pxf/TroubleshootingPXF.html b/docs/userguide/2.1.0.0-incubating/pxf/TroubleshootingPXF.html
index 168d596..e9f5c63 100644
--- a/docs/userguide/2.1.0.0-incubating/pxf/TroubleshootingPXF.html
+++ b/docs/userguide/2.1.0.0-incubating/pxf/TroubleshootingPXF.html
@@ -170,6 +170,9 @@
           <li>
             <a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
           </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/logfiles.html">HAWQ Administrative Log Files</a>
+          </li>
         </ul>
       </li>
       <li class="has_submenu">
@@ -443,6 +446,7 @@
       </li>
       <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/HAWQBestPracticesOverview.html">Best Practices</a>
         <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/config_hawq_bestpractices.html">Configuring HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/operating_hawq_bestpractices.html">Operating HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/secure_bestpractices.html">Securing HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_resources_bestpractices.html">Managing Resources</a></li>
@@ -565,11 +569,17 @@
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval">gp_filerep_tcp_keepalives_interval</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket">gp_hashjoin_tuples_per_bucket</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_idf_deduplicate">gp_idf_deduplicate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_cache_future_packets">gp_interconnect_cache_future_packets</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_default_rtt">gp_interconnect_default_rtt</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_fc_method">gp_interconnect_fc_method</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier">gp_interconnect_hash_multiplier</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_retries_before_timeout">gp_interconnect_min_retries_before_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_rto">gp_interconnect_min_rto</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_queue_depth">gp_interconnect_queue_depth</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout">gp_interconnect_setup_timeout</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth">gp_interconnect_snd_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_checking_period">gp_interconnect_timer_checking_period</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_period">gp_interconnect_timer_period</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_type">gp_interconnect_type</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_log_format">gp_log_format</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_csv_line_length">gp_max_csv_line_length</a></li>
@@ -981,7 +991,26 @@
         <div class="to-top" id="js-to-top">
           <a href="#top" title="back to top"></a>
         </div>
-        <h2><a id="pxerrortbl"></a>PXF Errors</h2>
+        <!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<h2><a id="pxerrortbl"></a>PXF Errors</h2>
 
 <p>The following table lists some common errors encountered while using PXF:</p>
 
@@ -1146,6 +1175,8 @@
 
 <h3><a id="pxfdblogmsg"></a>Database-Level Logging</h3>
 
+<p>Database-level logging may provide insight into internal PXF service operations. Additionally, when you access Hive tables using <code>hcatalog</code> or the <code>Hive*</code> profiles, log messages identify the underlying <code>Hive*</code> profile(s) employed to access the data.</p>
+
 <p>Enable HAWQ and PXF debug message logging during operations on PXF external tables by setting the <code>client_min_messages</code> server configuration parameter to <code>DEBUG2</code> in your <code>psql</code> session.</p>
 <pre class="highlight shell"><code><span class="gp">$ </span>psql
 </code></pre>
@@ -1158,6 +1189,8 @@
 <span class="n">DEBUG2</span><span class="p">:</span>  <span class="n">churl</span> <span class="n">http</span> <span class="n">header</span><span class="p">:</span> <span class="n">cell</span> <span class="o">#</span><span class="mi">22</span><span class="p">:</span> <span class="n">X</span><span class="o">-</span><span class="n">GP</span><span class="o">-</span><span class="n">profile</span><span class="p">:</span> <span class="n">Hive</span>
 <span class="n">DEBUG2</span><span class="p">:</span>  <span class="n">churl</span> <span class="n">http</span> <span class="n">header</span><span class="p">:</span> <span class="n">cell</span> <span class="o">#</span><span class="mi">23</span><span class="p">:</span> <span class="n">X</span><span class="o">-</span><span class="n">GP</span><span class="o">-</span><span class="n">URI</span><span class="p">:</span> <span class="n">pxf</span><span class="p">:</span><span class="o">//</span><span class="n">namenode</span><span class="p">:</span><span class="mi">51200</span><span class="o">/</span><span class="n">pxf_hive1</span><span class="o">?</span><span class="n">profile</span><span class="o">=</span><span class="n">Hive</span>
 <span class="p">...</span>
+<span class="n">DEBUG2</span><span class="p">:</span>  <span class="n">pxf</span><span class="p">:</span> <span class="n">set_current_fragment_headers</span><span class="p">:</span> <span class="k">using</span> <span class="n">profile</span><span class="p">:</span> <span class="n">Hive</span>
+<span class="p">...</span>
 </code></pre>
 
 <p>Examine and collect the log messages from <code>stdout</code>.</p>
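+
+<p>When you finish troubleshooting, you can return the session to its usual verbosity; <code>NOTICE</code> is the typical default:</p>
+<pre class="highlight sql"><code>SET client_min_messages = NOTICE;
+</code></pre>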

http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/1b0cdd8e/docs/userguide/2.1.0.0-incubating/query/HAWQQueryProcessing.html
----------------------------------------------------------------------
diff --git a/docs/userguide/2.1.0.0-incubating/query/HAWQQueryProcessing.html b/docs/userguide/2.1.0.0-incubating/query/HAWQQueryProcessing.html
index f54e7e8..2a32655 100644
--- a/docs/userguide/2.1.0.0-incubating/query/HAWQQueryProcessing.html
+++ b/docs/userguide/2.1.0.0-incubating/query/HAWQQueryProcessing.html
@@ -170,6 +170,9 @@
           <li>
             <a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
           </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/logfiles.html">HAWQ Administrative Log Files</a>
+          </li>
         </ul>
       </li>
       <li class="has_submenu">
@@ -443,6 +446,7 @@
       </li>
       <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/HAWQBestPracticesOverview.html">Best Practices</a>
         <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/config_hawq_bestpractices.html">Configuring HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/operating_hawq_bestpractices.html">Operating HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/secure_bestpractices.html">Securing HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_resources_bestpractices.html">Managing Resources</a></li>
@@ -565,11 +569,17 @@
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval">gp_filerep_tcp_keepalives_interval</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket">gp_hashjoin_tuples_per_bucket</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_idf_deduplicate">gp_idf_deduplicate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_cache_future_packets">gp_interconnect_cache_future_packets</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_default_rtt">gp_interconnect_default_rtt</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_fc_method">gp_interconnect_fc_method</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier">gp_interconnect_hash_multiplier</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_retries_before_timeout">gp_interconnect_min_retries_before_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_rto">gp_interconnect_min_rto</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_queue_depth">gp_interconnect_queue_depth</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout">gp_interconnect_setup_timeout</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth">gp_interconnect_snd_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_checking_period">gp_interconnect_timer_checking_period</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_period">gp_interconnect_timer_period</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_type">gp_interconnect_type</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_log_format">gp_log_format</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_csv_line_length">gp_max_csv_line_length</a></li>
@@ -971,7 +981,26 @@
         <div class="to-top" id="js-to-top">
           <a href="#top" title="back to top"></a>
         </div>
-        <p>This topic provides an overview of how HAWQ processes queries. Understanding this process can be useful when writing and tuning queries.</p>
+        <!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<p>This topic provides an overview of how HAWQ processes queries. Understanding this process can be useful when writing and tuning queries.</p>
 
 <p>Users issue queries to HAWQ as they would to any database management system. They connect to the database instance on the HAWQ master host using a client application such as <code>psql</code> and submit SQL statements.</p>
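+
+<p>For example, a session might connect to the master host and submit a query as follows; the host and database names are placeholders:</p>
+<pre class="highlight shell"><code>$ psql -h hawq_master -p 5432 -d testdb
+testdb=# SELECT count(*) FROM sales;
+</code></pre>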
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/1b0cdd8e/docs/userguide/2.1.0.0-incubating/query/defining-queries.html
----------------------------------------------------------------------
diff --git a/docs/userguide/2.1.0.0-incubating/query/defining-queries.html b/docs/userguide/2.1.0.0-incubating/query/defining-queries.html
index 7c2fdd4..b5e02af 100644
--- a/docs/userguide/2.1.0.0-incubating/query/defining-queries.html
+++ b/docs/userguide/2.1.0.0-incubating/query/defining-queries.html
@@ -170,6 +170,9 @@
           <li>
             <a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
           </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/logfiles.html">HAWQ Administrative Log Files</a>
+          </li>
         </ul>
       </li>
       <li class="has_submenu">
@@ -443,6 +446,7 @@
       </li>
       <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/HAWQBestPracticesOverview.html">Best Practices</a>
         <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/config_hawq_bestpractices.html">Configuring HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/operating_hawq_bestpractices.html">Operating HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/secure_bestpractices.html">Securing HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_resources_bestpractices.html">Managing Resources</a></li>
@@ -565,11 +569,17 @@
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval">gp_filerep_tcp_keepalives_interval</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket">gp_hashjoin_tuples_per_bucket</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_idf_deduplicate">gp_idf_deduplicate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_cache_future_packets">gp_interconnect_cache_future_packets</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_default_rtt">gp_interconnect_default_rtt</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_fc_method">gp_interconnect_fc_method</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier">gp_interconnect_hash_multiplier</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_retries_before_timeout">gp_interconnect_min_retries_before_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_rto">gp_interconnect_min_rto</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_queue_depth">gp_interconnect_queue_depth</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout">gp_interconnect_setup_timeout</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth">gp_interconnect_snd_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_checking_period">gp_interconnect_timer_checking_period</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_period">gp_interconnect_timer_period</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_type">gp_interconnect_type</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_log_format">gp_log_format</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_csv_line_length">gp_max_csv_line_length</a></li>
@@ -990,7 +1000,26 @@
         <div class="to-top" id="js-to-top">
           <a href="#top" title="back to top"></a>
         </div>
-        <p>HAWQ is based on the PostgreSQL implementation of the SQL standard. SQL commands are typically entered using the standard PostgreSQL interactive terminal <code>psql</code>, but other programs that have similar functionality can be used as well.</p>
+        <!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<p>HAWQ is based on the PostgreSQL implementation of the SQL standard. SQL commands are typically entered using the standard PostgreSQL interactive terminal <code>psql</code>, but other programs that have similar functionality can be used as well.</p>
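+
+<p>For example, <code>psql</code> accepts statements interactively or directly from the shell with the <code>-c</code> option; the database name is a placeholder:</p>
+<pre class="highlight shell"><code>$ psql -d testdb -c 'SELECT version();'
+</code></pre>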
 
 <h2><a id="topic3"></a>SQL Lexicon</h2>
 

http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/1b0cdd8e/docs/userguide/2.1.0.0-incubating/query/functions-operators.html
----------------------------------------------------------------------
diff --git a/docs/userguide/2.1.0.0-incubating/query/functions-operators.html b/docs/userguide/2.1.0.0-incubating/query/functions-operators.html
index 3784a83..ceab623 100644
--- a/docs/userguide/2.1.0.0-incubating/query/functions-operators.html
+++ b/docs/userguide/2.1.0.0-incubating/query/functions-operators.html
@@ -170,6 +170,9 @@
           <li>
             <a href="/docs/userguide/2.1.0.0-incubating/admin/monitor.html">Monitoring a HAWQ System</a>
           </li>
+          <li>
+            <a href="/docs/userguide/2.1.0.0-incubating/admin/logfiles.html">HAWQ Administrative Log Files</a>
+          </li>
         </ul>
       </li>
       <li class="has_submenu">
@@ -443,6 +446,7 @@
       </li>
       <li class="has_submenu"><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/HAWQBestPracticesOverview.html">Best Practices</a>
         <ul>
+          <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/config_hawq_bestpractices.html">Configuring HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/operating_hawq_bestpractices.html">Operating HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/secure_bestpractices.html">Securing HAWQ</a></li>
           <li><a href="/docs/userguide/2.1.0.0-incubating/bestpractices/managing_resources_bestpractices.html">Managing Resources</a></li>
@@ -565,11 +569,17 @@
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_filerep_tcp_keepalives_interval">gp_filerep_tcp_keepalives_interval</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_hashjoin_tuples_per_bucket">gp_hashjoin_tuples_per_bucket</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_idf_deduplicate">gp_idf_deduplicate</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_cache_future_packets">gp_interconnect_cache_future_packets</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_default_rtt">gp_interconnect_default_rtt</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_fc_method">gp_interconnect_fc_method</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_hash_multiplier">gp_interconnect_hash_multiplier</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_retries_before_timeout">gp_interconnect_min_retries_before_timeout</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_min_rto">gp_interconnect_min_rto</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_queue_depth">gp_interconnect_queue_depth</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_setup_timeout">gp_interconnect_setup_timeout</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_snd_queue_depth">gp_interconnect_snd_queue_depth</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_checking_period">gp_interconnect_timer_checking_period</a></li>
+                  <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_timer_period">gp_interconnect_timer_period</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_interconnect_type">gp_interconnect_type</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_log_format">gp_log_format</a></li>
                   <li><a href="/docs/userguide/2.1.0.0-incubating/reference/guc/parameter_definitions.html#gp_max_csv_line_length">gp_max_csv_line_length</a></li>
@@ -980,7 +990,26 @@
         <div class="to-top" id="js-to-top">
           <a href="#top" title="back to top"></a>
         </div>
-        <p>HAWQ evaluates functions and operators used in SQL expressions.</p>
+        <!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<p>HAWQ evaluates functions and operators used in SQL expressions.</p>
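+
+<p>For example, the following query evaluates both an operator and a function in its select list:</p>
+<pre class="highlight sql"><code>SELECT 2 + 3 AS sum, upper('hawq') AS shouted;
+</code></pre>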
 
 <h2><a id="topic27"></a>Using Functions in HAWQ</h2>
 
@@ -1127,7 +1156,7 @@ HAWQ does not support the following:</p>
 
 <h2><a id="userdefinedoperators"></a>User Defined Operators</h2>
 
-<p>Every operator is &quot;syntactic sugar&quot; for a call to an underlying function that does the real work; so you must first create the underlying function before you can create the operator. However, an operator is not merely syntactic sugar, because it carries additional information that helps the query planner optimize queries that use the operator. The next section will be devoted to explaining that additional information.</p>
+<p>Every operator is &ldquo;syntactic sugar&rdquo; for a call to an underlying function that does the real work, so you must first create the underlying function before you can create the operator. However, an operator is not merely syntactic sugar, because it carries additional information that helps the query planner optimize queries that use the operator. The next section explains that additional information.</p>
 
 <p>HAWQ supports left unary, right unary, and binary operators. Operators can be overloaded; that is, the same operator name can be used for different operators that have different numbers and types of operands. When a query is executed, the system determines the operator to call from the number and types of the provided operands.</p>
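+
+<p>A minimal sketch of this two-step pattern follows; the function, operator name, and type are illustrative (a textbook absolute-value prefix operator over <code>float8</code>), not definitions from this guide:</p>
+<pre class="highlight sql"><code>-- Sketch: first create the function that does the real work...
+CREATE FUNCTION my_abs(float8) RETURNS float8
+    AS 'SELECT abs($1)'
+    LANGUAGE SQL IMMUTABLE;
+
+-- ...then bind it to a left unary (prefix) operator name.
+CREATE OPERATOR ### (
+    rightarg  = float8,
+    procedure = my_abs
+);
+
+SELECT ### (-5.0);   -- returns 5
+</code></pre>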
 

