hadoop-common-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel
Date Wed, 01 Feb 2017 08:17:51 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848116#comment-15848116
] 

Yongjun Zhang commented on HADOOP-11794:
----------------------------------------

Hi [~mithun],

Thank you so much for the review and all the good comments!

I just uploaded rev 004 to address all of them.

* To answer your question in 3: to avoid the extra RPC call to get all blocks of a file, I
check the file size first; only if the file is large enough do I fetch all blocks of the file
and check whether the number of blocks is bigger than {{blocksPerChunk}}. So it's possible
that a file with many small blocks is not split. But I think that should be ok, because the
patch here intends to deal with really large files.
* About 6: the logging is already done in the method {{mergeFileChunks}}, when debug logging
is enabled.

In addition, I also added one more condition to check whether the source FS is DistributedFileSystem;
otherwise, the file won't be split either.
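To make the decision order concrete, here is a minimal, self-contained sketch of the split check described above. The names ({{shouldSplit}}, the parameters) are hypothetical and do not come from the patch; the point is only the ordering: cheap size check first, block-count check second, and no split for non-HDFS sources.

```java
// Hypothetical sketch of the split decision (names assumed, not from the patch).
final class SplitDecision {
    // Returns true if a file should be split into block-level chunks.
    // fileSize and blockSize stand in for values from the file's status;
    // blocksPerChunk mirrors the new distcp option.
    static boolean shouldSplit(long fileSize, long blockSize,
                               int blocksPerChunk, boolean sourceIsHdfs) {
        if (!sourceIsHdfs || blocksPerChunk <= 0) {
            return false; // only DistributedFileSystem sources are split
        }
        // Cheap size check first, so the extra RPC to list all blocks
        // is avoided for files that are clearly too small to split.
        if (fileSize <= (long) blocksPerChunk * blockSize) {
            return false;
        }
        // Only now would distcp fetch the block list and compare the
        // actual block count against blocksPerChunk; we estimate it here.
        long estimatedBlocks = (fileSize + blockSize - 1) / blockSize;
        return estimatedBlocks > blocksPerChunk;
    }
}
```

Note this is why a file made of many small blocks can escape splitting: the size check runs before the block count is ever fetched.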

I wonder if you could take a look at the new patch.

Thanks a lot.




> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, HADOOP-11794.003.patch,
HADOOP-11794.004.patch, MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are greater
than 1 TB with a block size of 1 GB. If we use distcp to copy these files, the tasks either
take a very long time or eventually fail. A better way for distcp would be to copy all
the source blocks in parallel, and then stitch the blocks back into files at the destination
via the HDFS Concat API (HDFS-222)
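The parallel scheme in the description amounts to carving the file into block-aligned byte ranges that separate map tasks can copy concurrently, before the chunks are concatenated at the destination. A minimal sketch of that range computation (the class and method names here are assumptions for illustration, not distcp code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: carve a file into block-aligned chunk ranges
// that map tasks could copy in parallel (names assumed).
final class ChunkRanges {
    // Each long[2] is {offset, length} of one chunk.
    static List<long[]> chunks(long fileSize, long blockSize, int blocksPerChunk) {
        List<long[]> out = new ArrayList<>();
        long chunkBytes = blockSize * blocksPerChunk;
        for (long off = 0; off < fileSize; off += chunkBytes) {
            // The last chunk may be shorter than chunkBytes.
            out.add(new long[] { off, Math.min(chunkBytes, fileSize - off) });
        }
        return out;
    }
}
```

After the chunks are copied, they would be joined with the HDFS concat API (e.g. {{DistributedFileSystem#concat}}), which stitches the chunk files into one file without rewriting data.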



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

