hive-issues mailing list archives

From Sergio Peña (JIRA) <>
Subject [jira] [Commented] (HIVE-15093) S3-to-S3 Renames: Files should be moved individually rather than at a directory level
Date Fri, 04 Nov 2016 20:24:58 GMT


Sergio Peña commented on HIVE-15093:

I agree with [~yalovyyi] that we should leave file system operations to HDFS and just use
the rename() API, but currently Hadoop is holding Hive back on performance because rename()
on S3 is slow due to serial copies. [~yalovyyi] Do you happen to know if Hadoop 2.8.0 will
make rename() perform better on blobstore systems? When will Hadoop 2.8.0 be released to
the public?

[~stakiar] I don't know if Hadoop will perform differently when doing renames on other blobstores.
I think we would need a configuration flag specific to each blobstore in case S3 is slow but
Azure blob storage is fast. We would want users to use Hive's parallel copies with S3, but
let Azure use its own rename() operation.
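A per-blobstore flag like that could be keyed off the destination URI scheme. A minimal sketch, assuming a hypothetical scheme-to-flag table (neither the class nor the flag names exist in Hive today):

```java
import java.net.URI;
import java.util.Map;

public class BlobstoreRenamePolicy {
    // Hypothetical per-scheme flags: true means use Hive's parallel
    // per-file copies, false means trust the filesystem's own rename().
    private static final Map<String, Boolean> PARALLEL_COPY_BY_SCHEME = Map.of(
        "s3a", true,   // S3 renames are serial copies, so parallelize
        "wasb", false  // assume Azure's rename() is fast enough
    );

    // Returns true when Hive should move files one by one with a threadpool.
    public static boolean useParallelCopy(URI dest) {
        return PARALLEL_COPY_BY_SCHEME.getOrDefault(dest.getScheme(), false);
    }
}
```

With this, `useParallelCopy(URI.create("s3a://bucket/warehouse/t"))` would select the parallel path while HDFS and Azure destinations would fall through to a plain rename().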

> S3-to-S3 Renames: Files should be moved individually rather than at a directory level
> -------------------------------------------------------------------------------------
>                 Key: HIVE-15093
>                 URL:
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Hive
>    Affects Versions: 2.1.0
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>         Attachments: HIVE-15093.1.patch, HIVE-15093.2.patch, HIVE-15093.3.patch, HIVE-15093.4.patch,
HIVE-15093.5.patch, HIVE-15093.6.patch, HIVE-15093.7.patch
> Hive's MoveTask uses the Hive.moveFile method to move data within a distributed filesystem
as well as blobstore filesystems.
> If the move is done within the same filesystem:
> 1: If the source path is a subdirectory of the destination path, files will be moved
one by one using a threadpool of workers
> 2: If the source path is not a subdirectory of the destination path, a single rename
operation is used to move the entire directory
> The second option may not work well on blobstores such as S3. Renames are not metadata
operations and require copying all the data. Client connectors to blobstores may not efficiently
rename directories. Worst case, the connector will copy each file one by one, sequentially
rather than using a threadpool of workers to copy the data (e.g. HADOOP-13600).
> Hive already has code to rename files using a threadpool of workers, but this only occurs
in case number 1.
> This JIRA aims to modify the code so that case 1 is triggered when copying within a blobstore.
The focus is on copies within a blobstore because needToCopy will return true if the src and
target filesystems are different, in which case a different code path is triggered.
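The per-file, threadpool-based move described in case 1 can be sketched with plain JDK primitives. This is an illustration of the approach only, not Hive's actual Hive.moveFile code, which works against Hadoop's FileSystem API rather than java.nio:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Stream;

public class ParallelMove {
    // Move every file directly under srcDir into dstDir, one task per
    // file, instead of issuing a single directory-level rename.
    public static void moveFiles(Path srcDir, Path dstDir, int threads)
            throws IOException, InterruptedException, ExecutionException {
        Files.createDirectories(dstDir);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try (Stream<Path> files = Files.list(srcDir)) {
            List<Future<?>> futures = new ArrayList<>();
            files.filter(Files::isRegularFile).forEach(src ->
                futures.add(pool.submit(() -> {
                    // On a blobstore each rename is a copy+delete, so
                    // running them concurrently recovers the throughput
                    // lost to a sequential directory rename.
                    return Files.move(src, dstDir.resolve(src.getFileName()),
                                      StandardCopyOption.REPLACE_EXISTING);
                })));
            for (Future<?> f : futures) {
                f.get(); // surface any per-file failure
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

Blocking on each Future at the end makes a failed per-file move abort the whole operation, which matches the semantics you would want from a single rename.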

This message was sent by Atlassian JIRA
