hive-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-14270) Write temporary data to HDFS when doing inserts on tables located on S3
Date Thu, 28 Jul 2016 17:27:20 GMT

    [ https://issues.apache.org/jira/browse/HIVE-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15397859#comment-15397859 ]

Chris Nauroth commented on HIVE-14270:
--------------------------------------

Separating development of a richer Hive-on-S3 integration test suite into a separate JIRA
sounds reasonable to me.  I expect the initial bootstrapping would be a large effort on its
own.  If you'd like more details on how Hadoop is handling that, please feel free to notify
those of us from Hadoop who do a lot of object store integration work when you file that new
JIRA.

Steve is out for several weeks now, so I don't expect further responses from him for a while.

> Write temporary data to HDFS when doing inserts on tables located on S3
> -----------------------------------------------------------------------
>
>                 Key: HIVE-14270
>                 URL: https://issues.apache.org/jira/browse/HIVE-14270
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Sergio Peña
>            Assignee: Sergio Peña
>         Attachments: HIVE-14270.1.patch
>
>
> Currently, when doing INSERT statements on tables located on S3, Hive writes and reads
temporary (or intermediate) files to and from S3 as well.
> If HDFS is still the default filesystem for Hive, then we can keep such temporary files
on HDFS so that things run faster.
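
As a rough sketch of that idea only (hypothetical code, not the attached HIVE-14270.1.patch;
the class, method, and scheme list below are illustrative assumptions): when the target
table's filesystem scheme is a blob store such as S3, route intermediate data to an
HDFS-backed scratch directory instead of a staging directory next to the table.

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.fs.Path;

    public class ScratchDirSelector {
        // Schemes treated as blob stores; illustrative list only.
        private static final List<String> BLOBSTORE_SCHEMES =
            Arrays.asList("s3", "s3a", "s3n");

        /**
         * Return the directory to use for temporary/intermediate files.
         * If the table path is on a blob store, fall back to the HDFS
         * scratch dir so intermediate reads and writes stay on HDFS.
         */
        public static Path selectScratchDir(Path tablePath, Path hdfsScratchDir) {
            String scheme = tablePath.toUri().getScheme();
            if (scheme != null && BLOBSTORE_SCHEMES.contains(scheme.toLowerCase())) {
                return hdfsScratchDir;   // keep intermediate data on HDFS
            }
            // Default behavior: stage next to the table on its own filesystem.
            return new Path(tablePath, ".hive-staging");
        }
    }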



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
