hadoop-common-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel
Date Wed, 09 Dec 2015 05:56:11 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15048097#comment-15048097 ]

Yongjun Zhang commented on HADOOP-11794:
----------------------------------------

We probably don't need ChunkUniformSizeInputFormat and can just use UniformSizeInputFormat
(breaking large files into chunks actually makes the splits more uniform). When a file doesn't
need to be broken into chunks, there is a single entry for it in the fileListing, and we make
the entry's chunkLength the same as its file length.
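
To make the idea concrete, roughly the listing logic could look like the sketch below. The
names (FileChunkEntry, makeChunks, the threshold parameter) are just for illustration here,
not taken from any patch:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: one listing entry per chunk, so the stock
// UniformSizeInputFormat still sees roughly uniform-sized entries.
class FileChunkEntry {
  final String path;      // source file path
  final long offset;      // starting byte of this chunk
  final long chunkLength; // bytes to copy for this chunk

  FileChunkEntry(String path, long offset, long chunkLength) {
    this.path = path;
    this.offset = offset;
    this.chunkLength = chunkLength;
  }
}

class ListingSketch {
  // Split a file into block-aligned chunks; a file at or below the
  // threshold yields a single entry whose chunkLength equals fileLength.
  static List<FileChunkEntry> makeChunks(String path, long fileLength,
                                         long blockSize, long threshold) {
    List<FileChunkEntry> entries = new ArrayList<>();
    if (fileLength <= threshold) {
      entries.add(new FileChunkEntry(path, 0, fileLength));
      return entries;
    }
    for (long offset = 0; offset < fileLength; offset += blockSize) {
      long len = Math.min(blockSize, fileLength - offset);
      entries.add(new FileChunkEntry(path, offset, len));
    }
    return entries;
  }
}
{code}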

I was thinking that for the initial implementation, we can just change the CopyCommitter,
as I described in my last comment, instead of introducing a reducer stage for distcp.
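
For illustration, the commit step could stitch the chunks together with the existing
FileSystem#concat call (implemented by DistributedFileSystem). The ".chunk.N" naming and
the commitStitch helper below are assumptions for the sketch only:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class StitchSketch {
  // Rename chunk 0 to the final file name, then concat the remaining
  // chunks onto it in order. HDFS concat only moves block metadata,
  // so no data is rewritten at the destination.
  static void commitStitch(FileSystem fs, Path finalFile, int numChunks)
      throws IOException {
    Path first = new Path(finalFile + ".chunk.0");
    fs.rename(first, finalFile);
    Path[] rest = new Path[numChunks - 1];
    for (int i = 1; i < numChunks; i++) {
      rest[i - 1] = new Path(finalFile + ".chunk." + i);
    }
    if (rest.length > 0) {
      fs.concat(finalFile, rest);
    }
  }
}
{code}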

Comments are welcome. Thanks.




> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are greater
> than 1 TB with a block size of 1 GB. If we use distcp to copy these files, the tasks
> either take a very long time or eventually fail. A better way for distcp would be to
> copy all the source blocks in parallel, and then stitch the blocks back into files at
> the destination via the HDFS concat API (HDFS-222).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
