hawq-dev mailing list archives

From dyozie <...@git.apache.org>
Subject [GitHub] incubator-hawq-docs pull request #39: HAWQ-1071 - add examples for HiveText ...
Date Thu, 27 Oct 2016 16:11:17 GMT
Github user dyozie commented on a diff in the pull request:

    https://github.com/apache/incubator-hawq-docs/pull/39#discussion_r85365540
  
    --- Diff: pxf/HivePXF.html.md.erb ---
    @@ -2,121 +2,450 @@
     title: Accessing Hive Data
     ---
     
    -This topic describes how to access Hive data using PXF. You have several options for querying data stored in Hive. You can create external tables in PXF and then query those tables, or you can easily query Hive tables by using HAWQ and PXF's integration with HCatalog. HAWQ accesses Hive table metadata stored in HCatalog.
    +Apache Hive is a distributed data warehousing infrastructure. Hive facilitates the management of large data sets and supports multiple data formats, including comma-separated value (.csv), RC, ORC, and Parquet. The PXF Hive plug-in reads data stored in Hive, as well as in HDFS or HBase.
    +
    +This section describes how to use PXF to access Hive data. Options for querying data stored in Hive include:
    +
    +-  Creating an external table in PXF and querying that table
    +-  Querying Hive tables via PXF's integration with HCatalog
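    +
    +As a minimal sketch of both options (the Hive table `sales_info`, its column list, and the PXF host/port below are hypothetical placeholders), an external table definition and an HCatalog query might look like:
    +
    +``` sql
    +-- Hypothetical example: map a Hive table named sales_info (database
    +-- "default") through the PXF Hive profile, then query it from HAWQ.
    +CREATE EXTERNAL TABLE salesinfo_hiveprofile (location TEXT, month TEXT, num_orders INT)
    +  LOCATION ('pxf://namenode:51200/default.sales_info?PROFILE=Hive')
    +  FORMAT 'CUSTOM' (formatter='pxfwritable_import');
    +
    +SELECT * FROM salesinfo_hiveprofile;
    +
    +-- Alternatively, query the Hive table in place via the HCatalog integration:
    +SELECT * FROM hcatalog.default.sales_info;
    +```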
     
     ## <a id="installingthepxfhiveplugin"></a>Prerequisites
     
    -Check the following before using PXF to access Hive:
    +Before accessing Hive data with HAWQ and PXF, ensure that:
     
    --   The PXF HDFS plug-in is installed on all cluster nodes.
    +-   The PXF HDFS plug-in is installed on all cluster nodes. See [Installing PXF Plug-ins](InstallPXFPlugins.html) for PXF plug-in installation information.
     -   The PXF Hive plug-in is installed on all cluster nodes.
     -   The Hive JAR files and conf directory are installed on all cluster nodes.
    --   Test PXF on HDFS before connecting to Hive or HBase.
    +-   You have tested PXF on HDFS.
     -   You are running the Hive Metastore service on a machine in your cluster. 
     -   You have set the `hive.metastore.uris` property in the `hive-site.xml` on the NameNode.
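    +
    +For example (the Metastore host name below is a placeholder; port 9083 is the Hive Metastore default), the `hive-site.xml` entry might look like:
    +
    +``` xml
    +<!-- Hypothetical host; point this at your Hive Metastore service -->
    +<property>
    +    <name>hive.metastore.uris</name>
    +    <value>thrift://metastorehost.example.com:9083</value>
    +</property>
    +```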
     
    +## <a id="topic_p2s_lvl_25"></a>Hive File Formats
    +
    +Hive supports several file formats:
    +
    +-   TextFile - flat file with data in comma-, tab-, or space-separated value format or JSON notation
    +-   SequenceFile - flat file consisting of binary key/value pairs
    +-   RCFile - record columnar data consisting of binary key/value pairs; high row compression rate
    +-   ORCFile - optimized row columnar data with stripe, footer, and postscript sections; reduces data size
    +-   Parquet - compressed columnar data representation
    +-   Avro - JSON-defined, schema-based data serialization format
    --- End diff --
    
    Just a suggestion, but I think this would read better as a 2-column term/definition table. You could even make it a 3-column table to describe which PXF plug-ins are used with each format.


