From: shivram@apache.org
To: commits@hawq.incubator.apache.org
Date: Thu, 21 Apr 2016 23:04:35 -0000
Subject: [16/28] incubator-hawq-site git commit: HAWQ-683. Publish pxf javadoc api

http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroFileAccessor.html
----------------------------------------------------------------------
diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroFileAccessor.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroFileAccessor.html
new file mode 100644
index 0000000..0f8ecf8
--- /dev/null
+++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroFileAccessor.html
org.apache.hawq.pxf.plugins.hdfs

Class AvroFileAccessor

Constructor Detail

AvroFileAccessor

public AvroFileAccessor(InputData input)
                 throws java.lang.Exception

Constructs an AvroFileAccessor that creates the job configuration and
accesses the Avro file to fetch the Avro schema.

Parameters:
input - all input parameters coming from the client
Throws:
java.lang.Exception - if getting the Avro schema fails
Method Detail

getReader

protected java.lang.Object getReader(org.apache.hadoop.mapred.JobConf jobConf,
                                     org.apache.hadoop.mapred.InputSplit split)
                              throws java.io.IOException

Description copied from class: HdfsSplittableDataAccessor

Specialized accessors will override this method and implement their own
record reader. For example, a plain delimited text accessor may want to
return a LineRecordReader.

Specified by:
getReader in class HdfsSplittableDataAccessor
Parameters:
jobConf - the Hadoop JobConf to use for the selected InputFormat
split - the input split to be read by the accessor
Returns:
a record reader to be used for reading the data records of the split
Throws:
java.io.IOException - if the record reader could not be created
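To make the contract concrete, here is a minimal sketch (not from the HAWQ
sources) of a hypothetical delimited-text accessor. It assumes the
(InputData, InputFormat) superclass constructor used by the other PXF HDFS
accessors; the class name is illustrative only.

import java.io.IOException;

import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hawq.pxf.api.utilities.InputData;

// Hypothetical specialized accessor, shown only to illustrate the
// getReader() contract described above.
public class PlainTextAccessor extends HdfsSplittableDataAccessor {

    public PlainTextAccessor(InputData input) throws Exception {
        // Assumption: the superclass takes the InputFormat that will be
        // used to generate the splits.
        super(input, new TextInputFormat());
    }

    @Override
    protected Object getReader(JobConf jobConf, InputSplit split)
            throws IOException {
        // A plain delimited text accessor can simply hand back Hadoop's
        // stock line-oriented record reader for this split.
        return new LineRecordReader(jobConf, (FileSplit) split);
    }
}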
readNextObject

public OneRow readNextObject()
                      throws java.io.IOException

The Avro accessor is currently the only specialized accessor that
overrides this method. This is because of the special
AvroRecordReader.next() semantics (use of the AvroWrapper), which prevent
it from reusing the RecordReader's default implementation in
SplittableFileAccessor.

Specified by:
readNextObject in interface ReadAccessor
Overrides:
readNextObject in class HdfsSplittableDataAccessor
Returns:
the object which was read
Throws:
java.io.IOException
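For illustration, a minimal sketch of the AvroWrapper-based read loop this
method implies. This is not the actual AvroFileAccessor code; the Avro
mapred API and OneRow's (key, data) constructor are the assumptions here.

import java.io.IOException;

import org.apache.avro.generic.GenericRecord;
import org.apache.avro.mapred.AvroWrapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hawq.pxf.api.OneRow;

// Sketch: reading Avro records through the mapred AvroWrapper key.
class AvroReadLoopSketch {
    private final RecordReader<AvroWrapper<GenericRecord>, NullWritable> reader;
    private final AvroWrapper<GenericRecord> key = new AvroWrapper<GenericRecord>();

    AvroReadLoopSketch(RecordReader<AvroWrapper<GenericRecord>, NullWritable> reader) {
        this.reader = reader;
    }

    OneRow readNextObject() throws IOException {
        if (!reader.next(key, NullWritable.get())) {
            return null; // end of split
        }
        // The record of interest is the wrapper's datum, not the wrapper
        // itself - which is why the default read loop cannot be reused.
        return new OneRow(null, key.datum());
    }
}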
http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroResolver.html
----------------------------------------------------------------------
diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroResolver.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroResolver.html
new file mode 100644
index 0000000..94c19a7
--- /dev/null
+++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/AvroResolver.html
org.apache.hawq.pxf.plugins.hdfs

Class AvroResolver

All Implemented Interfaces:
ReadResolver

public class AvroResolver
extends Plugin
implements ReadResolver

Class AvroResolver handles deserialization of records that were serialized
using the Avro serialization framework.
Constructor Summary

Constructors
Constructor and Description
AvroResolver(InputData input)
Constructs an AvroResolver.
Method Summary

All Methods Instance Methods Concrete Methods
Modifier and Type         Method and Description
java.util.List<OneField>  getFields(OneRow row)
                          Returns a list of the fields of one record.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Constructor Detail

AvroResolver

public AvroResolver(InputData input)
             throws java.io.IOException

Constructs an AvroResolver. Initializes the Avro data structures: the Avro
record fields information and the Avro record reader. All Avro data is
built from the Avro schema, which is based on the *.avsc file that was
passed by the user.

Parameters:
input - all input parameters coming from the client
Throws:
java.io.IOException - if the Avro schema could not be retrieved or parsed
Method Detail

getFields

public java.util.List<OneField> getFields(OneRow row)
                                   throws java.lang.Exception

Returns a list of the fields of one record. Each record field is
represented by a OneField item. A OneField item contains two fields: an
integer representing the field type and a Java Object representing the
field value.

Specified by:
getFields in interface ReadResolver
Parameters:
row - the row to get the fields from
Returns:
the OneField list of one row
Throws:
java.lang.Exception - if decomposing the row into fields failed
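As a usage illustration, a hedged sketch of the accessor/resolver pairing.
The PXF bridge does something similar; the ReadAccessor lifecycle methods
(openForRead/readNextObject/closeForRead) and OneField's public type/val
fields are the assumptions here.

import java.util.List;

import org.apache.hawq.pxf.api.OneField;
import org.apache.hawq.pxf.api.OneRow;
import org.apache.hawq.pxf.api.ReadAccessor;
import org.apache.hawq.pxf.api.ReadResolver;

public final class RowDumper {
    // Reads every row from the accessor and decomposes it via the resolver.
    public static void dump(ReadAccessor accessor, ReadResolver resolver)
            throws Exception {
        if (!accessor.openForRead()) {
            return; // nothing to read
        }
        try {
            OneRow row;
            while ((row = accessor.readNextObject()) != null) {
                List<OneField> fields = resolver.getFields(row);
                for (OneField field : fields) {
                    // Each OneField pairs an integer type code with the value.
                    System.out.println(field.type + " -> " + field.val);
                }
            }
        } finally {
            accessor.closeForRead();
        }
    }
}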
http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkReader.html
----------------------------------------------------------------------
diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkReader.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkReader.html
new file mode 100644
index 0000000..dfa941d
--- /dev/null
+++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkReader.html
org.apache.hawq.pxf.plugins.hdfs

Class ChunkReader

java.lang.Object
  org.apache.hawq.pxf.plugins.hdfs.ChunkReader

All Implemented Interfaces:
java.io.Closeable, java.lang.AutoCloseable

public class ChunkReader
extends java.lang.Object
implements java.io.Closeable

A class that provides a line reader from an input stream. Lines are
terminated by '\n' (LF); EOF also terminates an otherwise unterminated line.
Field Summary

Fields
Modifier and Type  Field and Description
static int         DEFAULT_BUFFER_SIZE

Constructor Summary

Constructors
Constructor and Description
ChunkReader(java.io.InputStream in)
Constructs a ChunkReader instance.
Method Summary

All Methods Instance Methods Concrete Methods
Modifier and Type  Method and Description
void               close()
                   Closes the underlying stream.
int                readChunk(org.apache.hadoop.io.Writable str, int maxBytesToConsume)
                   Reads data in chunks of DEFAULT_CHUNK_SIZE, until we reach maxBytesToConsume.
int                readLine(org.apache.hadoop.io.Writable str, int maxBytesToConsume)
                   Reads a line terminated by LF.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Constructor Detail

ChunkReader

public ChunkReader(java.io.InputStream in)

Constructs a ChunkReader instance.

Parameters:
in - input stream
Method Detail

close

public void close()
           throws java.io.IOException

Closes the underlying stream.

Specified by:
close in interface java.io.Closeable
Specified by:
close in interface java.lang.AutoCloseable
Throws:
java.io.IOException
readChunk

public int readChunk(org.apache.hadoop.io.Writable str,
                     int maxBytesToConsume)
              throws java.io.IOException

Reads data in chunks of DEFAULT_CHUNK_SIZE, until we reach
maxBytesToConsume.

Parameters:
str - output parameter, will contain the read chunk byte array
maxBytesToConsume - requested chunk size
Returns:
actual chunk size
Throws:
java.io.IOException - if the first byte cannot be read for any reason
other than the end of the file, if the input stream has been closed,
or if some other I/O error occurs
readLine

public int readLine(org.apache.hadoop.io.Writable str,
                    int maxBytesToConsume)
             throws java.io.IOException

Reads a line terminated by LF.

Parameters:
str - output parameter, will contain the read record
maxBytesToConsume - the line must not exceed this value
Returns:
length of the line read
Throws:
java.io.IOException - if the first byte cannot be read for any reason
other than the end of the file, if the input stream has been closed,
or if some other I/O error occurs
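A small usage sketch. Assumptions: ChunkWritable serves as the output
buffer with its bytes exposed through its public box field, readLine()
returns 0 once the stream is exhausted, and the file path and 1 MB cap are
illustrative only.

import java.io.FileInputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hawq.pxf.plugins.hdfs.ChunkReader;
import org.apache.hawq.pxf.plugins.hdfs.ChunkWritable;

public class ChunkReaderDemo {
    public static void main(String[] args) throws Exception {
        // ChunkReader implements Closeable, so try-with-resources applies.
        try (ChunkReader reader =
                 new ChunkReader(new FileInputStream("/tmp/sample.txt"))) {
            ChunkWritable line = new ChunkWritable();
            // Cap each line at 1 MB - an arbitrary limit for this demo.
            while (reader.readLine(line, 1024 * 1024) > 0) {
                // line.box holds the raw bytes of the record just read.
                System.out.println(new String(line.box, StandardCharsets.UTF_8));
            }
        }
    }
}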
http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkRecordReader.html
----------------------------------------------------------------------
diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkRecordReader.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkRecordReader.html
new file mode 100644
index 0000000..0b0502b
--- /dev/null
+++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkRecordReader.html
org.apache.hawq.pxf.plugins.hdfs

Class ChunkRecordReader

java.lang.Object
  org.apache.hawq.pxf.plugins.hdfs.ChunkRecordReader

All Implemented Interfaces:
org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>

public class ChunkRecordReader
extends java.lang.Object
implements org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>

ChunkRecordReader is designed for fast reading of a file split. The idea is
to bring chunks of data instead of single records. The chunks contain many
records, and the chunk end is not aligned on a record boundary. The size of
the chunk is a hardcoded class parameter - CHUNK_SIZE. This behaviour sets
this reader apart from the other readers, which fetch one record and stop
when reaching a record delimiter.
Constructor Summary

Constructors
Constructor and Description
ChunkRecordReader(org.apache.hadoop.conf.Configuration job, org.apache.hadoop.mapred.FileSplit split)
Constructs a ChunkRecordReader instance.
Method Summary

All Methods Instance Methods Concrete Methods
Modifier and Type                                     Method and Description
void                                                  close()
                                                      Closes the input stream.
org.apache.hadoop.io.LongWritable                     createKey()
                                                      Used by the client of this class to create the 'key' output parameter for the next() method.
ChunkWritable                                         createValue()
                                                      Used by the client of this class to create the 'value' output parameter for the next() method.
long                                                  getPos()
                                                      Returns the position of the unread tail of the file.
float                                                 getProgress()
                                                      Gets the progress within the split.
org.apache.hadoop.hdfs.DFSInputStream.ReadStatistics  getReadStatistics()
                                                      Returns statistics of the input stream's read operation: total bytes read, bytes read locally, bytes read in short-circuit (directly from file descriptor).
boolean                                               next(org.apache.hadoop.io.LongWritable key, ChunkWritable value)
                                                      Fetches the next data chunk from the file split.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Constructor Detail

ChunkRecordReader

public ChunkRecordReader(org.apache.hadoop.conf.Configuration job,
                         org.apache.hadoop.mapred.FileSplit split)
                  throws java.io.IOException

Constructs a ChunkRecordReader instance.

Parameters:
job - the job configuration
split - contains the file name, the begin byte of the split and the
length in bytes
Throws:
java.io.IOException - if an I/O error occurs when accessing the file or
creating the input stream to read from it
Method Detail

getReadStatistics

public org.apache.hadoop.hdfs.DFSInputStream.ReadStatistics getReadStatistics()

Returns statistics of the input stream's read operation: total bytes
read, bytes read locally, bytes read in short-circuit (directly from file
descriptor).

Returns:
an instance of the ReadStatistics class
createKey

public org.apache.hadoop.io.LongWritable createKey()

Used by the client of this class to create the 'key' output parameter for
the next() method.

Specified by:
createKey in interface org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>
Returns:
an instance of LongWritable
createValue

public ChunkWritable createValue()

Used by the client of this class to create the 'value' output parameter
for the next() method.

Specified by:
createValue in interface org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>
Returns:
an instance of ChunkWritable
next

public boolean next(org.apache.hadoop.io.LongWritable key,
                    ChunkWritable value)
             throws java.io.IOException

Fetches the next data chunk from the file split. The size of the chunk is
a hardcoded class parameter - CHUNK_SIZE. This behaviour sets this reader
apart from the other readers, which fetch one record and stop when
reaching a record delimiter.

Specified by:
next in interface org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>
Parameters:
key - output parameter; on return it contains the key - the number of the
start byte of the chunk
value - output parameter; on return it contains the value - the chunk, a
byte array inside the ChunkWritable instance
Returns:
false when the end of the split is reached
Throws:
java.io.IOException - if an I/O error occurred while reading the next chunk
or line
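A sketch of the usual RecordReader consumption loop applied to this class.
The Configuration and FileSplit are assumed to be supplied by the caller,
and byte counting stands in for real record processing.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hawq.pxf.plugins.hdfs.ChunkRecordReader;
import org.apache.hawq.pxf.plugins.hdfs.ChunkWritable;

public final class ChunkScan {
    // Streams every chunk of the split and totals the bytes seen.
    public static long countBytes(Configuration conf, FileSplit split)
            throws IOException {
        ChunkRecordReader reader = new ChunkRecordReader(conf, split);
        try {
            LongWritable key = reader.createKey();       // chunk start byte
            ChunkWritable value = reader.createValue();  // chunk payload
            long total = 0;
            while (reader.next(key, value)) {
                total += value.box.length; // bytes delivered in this chunk
            }
            return total;
        } finally {
            reader.close();
        }
    }
}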
getProgress

public float getProgress()
                  throws java.io.IOException

Gets the progress within the split.

Specified by:
getProgress in interface org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>
Throws:
java.io.IOException
getPos

public long getPos()
            throws java.io.IOException

Returns the position of the unread tail of the file.

Specified by:
getPos in interface org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>
Returns:
pos - the start byte of the unread tail of the file
Throws:
java.io.IOException
close

public void close()
           throws java.io.IOException

Closes the input stream.

Specified by:
close in interface org.apache.hadoop.mapred.RecordReader<org.apache.hadoop.io.LongWritable,ChunkWritable>
Throws:
java.io.IOException
http://git-wip-us.apache.org/repos/asf/incubator-hawq-site/blob/ab8cf62a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkWritable.html
----------------------------------------------------------------------
diff --git a/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkWritable.html b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkWritable.html
new file mode 100644
index 0000000..da72fdf
--- /dev/null
+++ b/docs/pxf/javadoc/org/apache/hawq/pxf/plugins/hdfs/ChunkWritable.html
org.apache.hawq.pxf.plugins.hdfs

Class ChunkWritable

java.lang.Object
  org.apache.hawq.pxf.plugins.hdfs.ChunkWritable

All Implemented Interfaces:
org.apache.hadoop.io.Writable

public class ChunkWritable
extends java.lang.Object
implements org.apache.hadoop.io.Writable

Just an output buffer for the ChunkRecordReader. It must implement Writable,
otherwise it will not fit into the next() interface method.
Field Summary

Fields
Modifier and Type  Field and Description
byte[]             box

Constructor Summary

Constructors
Constructor and Description
ChunkWritable()
Method Summary

All Methods Instance Methods Concrete Methods
Modifier and Type  Method and Description
void               readFields(java.io.DataInput in)
                   Deserializes the fields of this object from in.
void               write(java.io.DataOutput out)
                   Serializes the fields of this object to out.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail

box

public byte[] box
Constructor Detail

ChunkWritable

public ChunkWritable()
Method Detail

write

public void write(java.io.DataOutput out)

Serializes the fields of this object to out.

Specified by:
write in interface org.apache.hadoop.io.Writable
Parameters:
out - DataOutput to serialize this object into.
Throws:
java.lang.UnsupportedOperationException - this function is not supported
readFields

public void readFields(java.io.DataInput in)

Deserializes the fields of this object from in.

For efficiency, implementations should attempt to re-use storage in the
existing object where possible.

Specified by:
readFields in interface org.apache.hadoop.io.Writable
Parameters:
in - DataInput to deserialize this object from.
Throws:
java.lang.UnsupportedOperationException - this function is not supported
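A brief sketch of the intended usage pattern. The assumption, suggested by
the two methods above rejecting serialization, is that data travels through
the public box field rather than through Writable serialization.

import java.nio.charset.StandardCharsets;

import org.apache.hawq.pxf.plugins.hdfs.ChunkWritable;

public class ChunkWritableDemo {
    public static void main(String[] args) {
        ChunkWritable chunk = new ChunkWritable();
        // The reader fills 'box' directly; no copy through
        // write()/readFields(), which throw UnsupportedOperationException.
        chunk.box = "raw record bytes\n".getBytes(StandardCharsets.UTF_8);
        System.out.println("chunk holds " + chunk.box.length + " bytes");
    }
}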