hive-dev mailing list archives

From "Phabricator (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-3874) Create a new Optimized Row Columnar file format for Hive
Date Thu, 21 Feb 2013 11:08:19 GMT

    [ https://issues.apache.org/jira/browse/HIVE-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13583107#comment-13583107 ]

Phabricator commented on HIVE-3874:
-----------------------------------

njain has commented on the revision "HIVE-3874 [jira] Create a new Optimized Row Columnar file format for Hive".

  Right now, the RLE is fixed. Should it be pluggable? I mean, we can have a different scheme to store deltas.
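
  A pluggability hook could be as small as one interface. A rough sketch, assuming a hypothetical name (IntegerEncoding is illustrative, not part of the D8529 patch):

    import java.io.IOException;

    /** Hypothetical pluggable integer-encoding contract; the name and
     *  methods are illustrative, not the patch's API. */
    interface IntegerEncoding {
      void write(long value) throws IOException;  // buffer one value
      void flush() throws IOException;            // emit any pending run
    }

  A delta-based scheme could then be swapped in behind the same contract without touching the column writers that call it.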


  It is perfectly fine to do all these changes in follow-ups. Can you file jiras for them as you see appropriate? That way, once the basic framework is in, other people can also jump in.
INLINE COMMENTS
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerWriter.java:84 Is this correct -- should you be comparing with literals[0]?
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerWriter.java:105 If I understand it right, we are not optimizing for deltas:
  If the data is:

  10
  11
  12
  13
  14

  We will be storing each value separately.
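
  To make the concern concrete, a delta-aware run detector would fold that sequence into a single run. A minimal sketch only, not the patch's RunLengthIntegerWriter (the class and method names are invented for illustration):

    import java.util.ArrayList;
    import java.util.List;

    class DeltaRleSketch {
      /** One run: 'count' values starting at 'base', each 'delta' apart. */
      static class Run {
        final long base, delta;
        final int count;
        Run(long base, long delta, int count) {
          this.base = base; this.delta = delta; this.count = count;
        }
        public String toString() {
          return "(base=" + base + ", delta=" + delta + ", count=" + count + ")";
        }
      }

      static List<Run> encode(long[] values) {
        List<Run> runs = new ArrayList<Run>();
        int i = 0;
        while (i < values.length) {
          int j = i + 1;
          long delta = (j < values.length) ? values[j] - values[i] : 0;
          // Extend the run while consecutive differences stay constant.
          while (j < values.length && values[j] - values[j - 1] == delta) {
            j++;
          }
          runs.add(new Run(values[i], delta, j - i));
          i = j;
        }
        return runs;
      }

      public static void main(String[] args) {
        // 10,11,12,13,14 becomes one (base=10, delta=1, count=5) run
        // instead of five separate literal values.
        System.out.println(encode(new long[] {10, 11, 12, 13, 14}));
      }
    }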

REVISION DETAIL
  https://reviews.facebook.net/D8529

To: JIRA, omalley
Cc: kevinwilfong, njain

                
> Create a new Optimized Row Columnar file format for Hive
> --------------------------------------------------------
>
>                 Key: HIVE-3874
>                 URL: https://issues.apache.org/jira/browse/HIVE-3874
>             Project: Hive
>          Issue Type: Improvement
>          Components: Serializers/Deserializers
>            Reporter: Owen O'Malley
>            Assignee: Owen O'Malley
>         Attachments: hive.3874.2.patch, HIVE-3874.D8529.1.patch, HIVE-3874.D8529.2.patch, OrcFileIntro.pptx, orc.tgz
>
>
> There are several limitations of the current RC File format that I'd like to address by creating a new format:
> * each column value is stored as a binary blob, which means:
> ** the entire column value must be read, decompressed, and deserialized
> ** the file format can't use smarter type-specific compression
> ** push-down filters can't be evaluated
> * the start of each row group needs to be found by scanning
> * user metadata can only be added to the file when the file is created
> * the file doesn't store the number of rows per file or row group
> * there is no mechanism for seeking to a particular row number, which is required for external indexes
> * there is no mechanism for storing light-weight indexes within the file to enable push-down filters to skip entire row groups
> * the types of the rows aren't stored in the file
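
For the two index-related points above, the idea is that light-weight per-row-group min/max statistics let a reader skip whole groups. A minimal sketch, assuming hypothetical names rather than the proposed format's actual layout:

    import java.util.Arrays;
    import java.util.List;

    class RowGroupIndexSketch {
      /** Hypothetical light-weight statistics kept per row group. */
      static class ColumnStats {
        final long min, max, rowCount;
        ColumnStats(long min, long max, long rowCount) {
          this.min = min; this.max = max; this.rowCount = rowCount;
        }
      }

      /** Count row groups whose [min, max] range could satisfy "value > threshold". */
      static int groupsToRead(List<ColumnStats> index, long threshold) {
        int n = 0;
        for (ColumnStats s : index) {
          if (s.max > threshold) {  // group may contain a matching row
            n++;
          }
        }
        return n;
      }

      public static void main(String[] args) {
        List<ColumnStats> index = Arrays.asList(
            new ColumnStats(0, 99, 10000),     // skipped: max <= 100
            new ColumnStats(50, 150, 10000),   // read
            new ColumnStats(200, 300, 10000)); // read
        // For the push-down predicate "col > 100", only 2 of 3 groups are read.
        System.out.println(groupsToRead(index, 100));
      }
    }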

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
