From: shivram@apache.org To: commits@hawq.incubator.apache.org Date: Thu, 21 Apr 2016 23:04:32 -0000 Subject: [13/28] incubator-hawq-site git commit: HAWQ-683. Publish pxf javadoc api http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/package-tree.html ---------------------------------------------------------------------- diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/package-tree.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/package-tree.html new file mode 100644 index 0000000..c635861 --- /dev/null +++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/package-tree.html @@ -0,0 +1,161 @@ + + + + + +org.apache.hawq.pxf.plugins.hdfs Class Hierarchy + + + + + + + + + + + +
+

Hierarchy For Package org.apache.hawq.pxf.plugins.hdfs

+Package Hierarchies: + +
+
+

Class Hierarchy

+ +
+ + + + + + http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.MessageFmt.html ---------------------------------------------------------------------- diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.MessageFmt.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.MessageFmt.html new file mode 100644 index 0000000..ac8ba72 --- /dev/null +++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.MessageFmt.html @@ -0,0 +1,356 @@ + + + + + +DataSchemaException.MessageFmt + + + + + + + + + + + + +
+
org.apache.hawq.pxf.plugins.hdfs.utilities
+

Enum DataSchemaException.MessageFmt

+
+
+
    +
  • java.lang.Object
  • +
  • + +
  • +
+
+ +
+
+ +
+
+
    +
  • + + + +
      +
    • + + +

      Method Detail

      + + + +
        +
      • +

        values

        +
        public static DataSchemaException.MessageFmt[] values()
        +
        Returns an array containing the constants of this enum type, in +the order they are declared. This method may be used to iterate +over the constants as follows: +
        +for (DataSchemaException.MessageFmt c : DataSchemaException.MessageFmt.values())
        +    System.out.println(c);
        +
        +
        +
        Returns:
        +
        an array containing the constants of this enum type, in the order they are declared
        +
        +
      • +
      + + + +
        +
      • +

        valueOf

        +
        public static DataSchemaException.MessageFmt valueOf(java.lang.String name)
        +
        Returns the enum constant of this type with the specified name. +The string must match exactly an identifier used to declare an +enum constant in this type. (Extraneous whitespace characters are +not permitted.)
        +
        +
        Parameters:
        +
        name - the name of the enum constant to be returned.
        +
        Returns:
        +
        the enum constant with the specified name
        +
        Throws:
        +
        java.lang.IllegalArgumentException - if this enum type has no constant with the specified name
        +
        java.lang.NullPointerException - if the argument is null
        +
        +
      • +
      + + + +
        +
      • +

        getFormat

        +
        public java.lang.String getFormat()
        +
      • +
      +
    • +
    +
  • +
+
+
+ + + + + + + http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.html ---------------------------------------------------------------------- diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.html new file mode 100644 index 0000000..139fc14 --- /dev/null +++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/DataSchemaException.html @@ -0,0 +1,332 @@ + + + + + +DataSchemaException + + + + + + + + + + + + +
+
org.apache.hawq.pxf.plugins.hdfs.utilities
+

Class DataSchemaException

+
+
+
    +
  • java.lang.Object
  • +
  • +
      +
    • java.lang.Throwable
    • +
    • +
        +
      • java.lang.Exception
      • +
      • +
          +
        • java.lang.RuntimeException
        • +
        • +
            +
          • org.apache.hawq.pxf.plugins.hdfs.utilities.DataSchemaException
          • +
          +
        • +
        +
      • +
      +
    • +
    +
  • +
+
+ +
+
+
    +
  • + + + + + +
      +
    • + + +

      Method Summary

      + + + + + + + + + + +
      All Methods Instance Methods Concrete Methods 
      Modifier and TypeMethod and Description
      DataSchemaException.MessageFmtgetMsgFormat() 
      +
        +
      • + + +

        Methods inherited from class java.lang.Throwable

        +addSuppressed, fillInStackTrace, getCause, getLocalizedMessage, getMessage, getStackTrace, getSuppressed, initCause, printStackTrace, printStackTrace, printStackTrace, setStackTrace, toString
      • +
      +
        +
      • + + +

        Methods inherited from class java.lang.Object

        +clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
      • +
      +
    • +
    +
  • +
+
+
+
    +
  • + +
      +
    • + + +

      Constructor Detail

      + + + +
        +
      • +

        DataSchemaException

        +
        public DataSchemaException(DataSchemaException.MessageFmt msgFormat,
        +                           java.lang.String... msgArgs)
        +
        Constructs a DataSchemaException.
        +
        +
        Parameters:
        +
        msgFormat - the message format
        +
        msgArgs - the message arguments
        +
        +
      • +
      +
    • +
    + + +
  • +
+
+
+ + + + + + + http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/HdfsUtilities.html ---------------------------------------------------------------------- diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/HdfsUtilities.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/HdfsUtilities.html new file mode 100644 index 0000000..b1f7d0d --- /dev/null +++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/HdfsUtilities.html @@ -0,0 +1,462 @@ + + + + + +HdfsUtilities + + + + + + + + + + + + +
+
org.apache.hawq.pxf.plugins.hdfs.utilities
+

Class HdfsUtilities

+
+
+
    +
  • java.lang.Object
  • +
  • +
      +
    • org.apache.hawq.pxf.plugins.hdfs.utilities.HdfsUtilities
    • +
    +
  • +
+
+
    +
  • +
    +
    +
    public class HdfsUtilities
    +extends java.lang.Object
    +
    The HdfsUtilities class exposes helper methods for PXF classes.
    +
  • +
+
+
+
    +
  • + +
      +
    • + + +

      Constructor Summary

      + + + + + + + + +
      Constructors 
      Constructor and Description
      HdfsUtilities() 
      +
    • +
    + +
      +
    • + + +

      Method Summary

      + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      All Methods Static Methods Concrete Methods 
      Modifier and TypeMethod and Description
      static java.lang.StringabsoluteDataPath(java.lang.String dataSource) +
      HDFS data sources are absolute data paths.
      +
      static org.apache.avro.SchemagetAvroSchema(org.apache.hadoop.conf.Configuration conf, + java.lang.String dataSource) +
      Accessing the Avro file through the "unsplittable" API just to get the + schema.
      +
      static org.apache.hadoop.io.compress.CompressionCodecgetCodec(org.apache.hadoop.conf.Configuration conf, + java.lang.String name) +
      Helper routine to get compression codec through reflection.
      +
      static booleanisSplittableCodec(org.apache.hadoop.fs.Path path) +
      Returns true if the needed codec is splittable.
      +
      static booleanisThreadSafe(java.lang.String dataDir, + java.lang.String compCodec) +
      Checks whether requests should be handled in a single thread or not.
      +
      static org.apache.hadoop.mapred.FileSplitparseFragmentMetadata(InputData inputData) +
      Parses fragment metadata and returns a matching FileSplit.
      +
      static byte[]prepareFragmentMetadata(org.apache.hadoop.mapred.FileSplit fsp) +
      Prepares byte serialization of a file split information (start, length, + hosts) using ObjectOutputStream.
      +
      static java.lang.StringtoString(java.util.List<OneField> complexRecord, + java.lang.String delimiter) +
      Returns a string serialization of a list of fields.
      +
      +
        +
      • + + +

        Methods inherited from class java.lang.Object

        +clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      • +
      +
    • +
    +
  • +
+
+
+
    +
  • + +
      +
    • + + +

      Constructor Detail

      + + + +
        +
      • +

        HdfsUtilities

        +
        public HdfsUtilities()
        +
      • +
      +
    • +
    + +
      +
    • + + +

      Method Detail

      + + + +
        +
      • +

        absoluteDataPath

        +
        public static java.lang.String absoluteDataPath(java.lang.String dataSource)
        +
        HDFS data sources are absolute data paths. This method ensures that dataSource + begins with '/'.
        +
        +
        Parameters:
        +
        dataSource - The HDFS path to a file or directory of interest. + Retrieved from the client request.
        +
        Returns:
        +
        an absolute data path
        +
        +
      • +
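The behavior described above can be sketched in a few lines. This is a hypothetical re-implementation based only on the javadoc text (prefix '/' when the client-supplied path is not already absolute), not the actual PXF source:

```java
// Sketch of absoluteDataPath as described: ensure an HDFS data-source
// path is absolute by prefixing '/' when it is missing.
public class AbsolutePathSketch {
    static String absoluteDataPath(String dataSource) {
        return dataSource.startsWith("/") ? dataSource : "/" + dataSource;
    }
}
```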
      + + + +
        +
      • +

        getCodec

        +
        public static org.apache.hadoop.io.compress.CompressionCodec getCodec(org.apache.hadoop.conf.Configuration conf,
        +                                                                      java.lang.String name)
        +
        Helper routine to get compression codec through reflection.
        +
        +
        Parameters:
        +
        conf - configuration used for reflection
        +
        name - codec name
        +
        Returns:
        +
        generated CompressionCodec
        +
        +
      • +
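The reflection pattern described above can be illustrated with a small sketch. A JDK class stands in for a real Hadoop CompressionCodec (which would require hadoop-common on the classpath), and the real getCodec additionally wires the Configuration into the instantiated codec:

```java
// Hypothetical sketch of codec lookup through reflection: resolve a class
// by its fully qualified name and instantiate it via its no-arg constructor.
public class CodecLookupSketch {
    static Object instantiate(String className) throws ReflectiveOperationException {
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }
}
```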
      + + + +
        +
      • +

        isSplittableCodec

        +
        public static boolean isSplittableCodec(org.apache.hadoop.fs.Path path)
        +
        Returns true if the needed codec is splittable. If no codec is needed, it + returns true as well.
        +
        +
        Parameters:
        +
        path - path of the file to be read
        +
        Returns:
        +
        whether the codec needed for reading the specified path is splittable.
        +
        +
      • +
      + + + +
        +
      • +

        isThreadSafe

        +
        public static boolean isThreadSafe(java.lang.String dataDir,
        +                                   java.lang.String compCodec)
        +
        Checks whether requests should be handled in a single thread or not.
        +
        +
        Parameters:
        +
        dataDir - hdfs path to the data source
        +
        compCodec - the fully qualified name of the compression codec
        +
        Returns:
        +
        whether the request can be run in multi-threaded mode.
        +
        +
      • +
      + + + +
        +
      • +

        prepareFragmentMetadata

        +
        public static byte[] prepareFragmentMetadata(org.apache.hadoop.mapred.FileSplit fsp)
        +                                      throws java.io.IOException
        +
        Prepares byte serialization of a file split information (start, length, + hosts) using ObjectOutputStream.
        +
        +
        Parameters:
        +
        fsp - file split to be serialized
        +
        Returns:
        +
        byte serialization of fsp
        +
        Throws:
        +
        java.io.IOException - if I/O errors occur while writing to the underlying + stream
        +
        +
      • +
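The serialization described above (start, length, hosts written through an ObjectOutputStream) can be sketched without Hadoop on the classpath. This is a simplified stand-in, not the PXF implementation: FileSplit is replaced by the three raw values to keep the example self-contained.

```java
import java.io.*;

// Sketch of serializing split metadata (start, length, hosts) with
// ObjectOutputStream, mirroring the description of prepareFragmentMetadata.
public class SplitSerializationSketch {
    static byte[] serialize(long start, long length, String[] hosts) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeLong(start);
            out.writeLong(length);
            out.writeObject(hosts);
        }
        return bytes.toByteArray();
    }

    // Inverse operation, analogous to what parseFragmentMetadata would do
    // with the serialized bytes: returns {start, length, hosts}.
    static Object[] deserialize(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return new Object[] { in.readLong(), in.readLong(), (String[]) in.readObject() };
        }
    }
}
```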
      + + + +
        +
      • +

        parseFragmentMetadata

        +
        public static org.apache.hadoop.mapred.FileSplit parseFragmentMetadata(InputData inputData)
        +
        Parses fragment metadata and returns a matching FileSplit.
        +
        +
        Parameters:
        +
        inputData - request input data
        +
        Returns:
        +
        FileSplit with fragment metadata
        +
        +
      • +
      + + + +
        +
      • +

        getAvroSchema

        +
        public static org.apache.avro.Schema getAvroSchema(org.apache.hadoop.conf.Configuration conf,
        +                                                   java.lang.String dataSource)
        +                                            throws java.io.IOException
        +
        Accessing the Avro file through the "unsplittable" API just to get the + schema. The splittable API (AvroInputFormat), which is the one we will be + using to fetch the records, does not yet support getting the Avro schema.
        +
        +
        Parameters:
        +
        conf - Hadoop configuration
        +
        dataSource - Avro file (e.g., fileName.avro) path
        +
        Returns:
        +
        the Avro schema
        +
        Throws:
        +
        java.io.IOException - if I/O error occurred while accessing Avro schema file
        +
        +
      • +
      + + + +
        +
      • +

        toString

        +
        public static java.lang.String toString(java.util.List<OneField> complexRecord,
        +                                        java.lang.String delimiter)
        +
        Returns a string serialization of a list of fields. Fields of binary type + (BYTEA) are converted to octal representation to make sure they are + relayed properly to the DB.
        +
        +
        Parameters:
        +
        complexRecord - list of fields to be stringified
        +
        delimiter - delimiter between fields
        +
        Returns:
        +
        string of serialized fields using delimiter
        +
        +
      • +
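The delimiter-joining behavior described above can be sketched as follows. This is a simplified stand-in: OneField is replaced by plain Objects, and the octal re-encoding of BYTEA fields is omitted for brevity.

```java
import java.util.List;

// Sketch of delimiter-joined field serialization, mirroring the
// description of toString(List<OneField>, String).
public class FieldJoinSketch {
    static String join(List<?> fields, String delimiter) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.size(); i++) {
            if (i > 0) sb.append(delimiter);
            sb.append(fields.get(i));
        }
        return sb.toString();
    }
}
```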
      +
    • +
    +
  • +
+
+
+ + + + + + + http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/PxfInputFormat.html ---------------------------------------------------------------------- diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/PxfInputFormat.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/PxfInputFormat.html new file mode 100644 index 0000000..76548e3 --- /dev/null +++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/utilities/PxfInputFormat.html @@ -0,0 +1,352 @@ + + + + + +PxfInputFormat + + + + + + + + + + + + +
+
org.apache.hawq.pxf.plugins.hdfs.utilities
+

Class PxfInputFormat

+
+
+
    +
  • java.lang.Object
  • +
  • +
      +
    • org.apache.hadoop.mapred.FileInputFormat
    • +
    • +
        +
      • org.apache.hawq.pxf.plugins.hdfs.utilities.PxfInputFormat
      • +
      +
    • +
    +
  • +
+
+
    +
  • +
    +
    All Implemented Interfaces:
    +
    org.apache.hadoop.mapred.InputFormat
    +
    +
    +
    +
    public class PxfInputFormat
    +extends org.apache.hadoop.mapred.FileInputFormat
    +
    PxfInputFormat is not intended to read a specific format, hence it implements + a dummy getRecordReader. Instead, its purpose is to apply + FileInputFormat.getSplits from one point in PXF and obtain the splits that are + valid for the actual InputFormats, since all the InputFormats we use inherit + from FileInputFormat but do not override getSplits.
    +
  • +
+
+
+
    +
  • + +
      +
    • + + +

      Nested Class Summary

      +
        +
      • + + +

        Nested classes/interfaces inherited from class org.apache.hadoop.mapred.FileInputFormat

        +org.apache.hadoop.mapred.FileInputFormat.Counter
      • +
      +
    • +
    + +
      +
    • + + +

      Field Summary

      +
        +
      • + + +

        Fields inherited from class org.apache.hadoop.mapred.FileInputFormat

        +INPUT_DIR_RECURSIVE, LOG, NUM_INPUT_FILES
      • +
      +
    • +
    + +
      +
    • + + +

      Constructor Summary

      + + + + + + + + +
      Constructors 
      Constructor and Description
      PxfInputFormat() 
      +
    • +
    + +
      +
    • + + +

      Method Summary

      + + + + + + + + + + + + + + +
      All Methods Instance Methods Concrete Methods 
      Modifier and TypeMethod and Description
      org.apache.hadoop.mapred.RecordReadergetRecordReader(org.apache.hadoop.mapred.InputSplit split, + org.apache.hadoop.mapred.JobConf conf, + org.apache.hadoop.mapred.Reporter reporter) 
      protected booleanisSplitable(org.apache.hadoop.fs.FileSystem fs, + org.apache.hadoop.fs.Path filename) 
      +
        +
      • + + +

        Methods inherited from class org.apache.hadoop.mapred.FileInputFormat

        +addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getInputPathFilter, getInputPaths, getSplitHosts, getSplits, listStatus, makeSplit, makeSplit, setInputPathFilter, setInputPaths, setInputPaths, setMinSplitSize
      • +
      +
        +
      • + + +

        Methods inherited from class java.lang.Object

        +clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
      • +
      +
    • +
    +
  • +
+
+
+
    +
  • + +
      +
    • + + +

      Constructor Detail

      + + + +
        +
      • +

        PxfInputFormat

        +
        public PxfInputFormat()
        +
      • +
      +
    • +
    + +
      +
    • + + +

      Method Detail

      + + + +
        +
      • +

        getRecordReader

        +
        public org.apache.hadoop.mapred.RecordReader getRecordReader(org.apache.hadoop.mapred.InputSplit split,
        +                                                             org.apache.hadoop.mapred.JobConf conf,
        +                                                             org.apache.hadoop.mapred.Reporter reporter)
        +                                                      throws java.io.IOException
        +
        +
        Specified by:
        +
        getRecordReader in interface org.apache.hadoop.mapred.InputFormat
        +
        Specified by:
        +
        getRecordReader in class org.apache.hadoop.mapred.FileInputFormat
        +
        Throws:
        +
        java.io.IOException
        +
        +
      • +
      + + + +
        +
      • +

        isSplitable

        +
        protected boolean isSplitable(org.apache.hadoop.fs.FileSystem fs,
        +                              org.apache.hadoop.fs.Path filename)
        +
        +
        Overrides:
        +
        isSplitable in class org.apache.hadoop.mapred.FileInputFormat
        +
        +
      • +
      +
    • +
    +
  • +
+
+
+ + + + + + +