hadoop-common-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-11794) distcp can copy blocks in parallel
Date Thu, 23 Mar 2017 05:40:42 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongjun Zhang updated HADOOP-11794:
-----------------------------------
    Release Note: 
If a positive value is passed to the command-line switch -blocksperchunk, files with more blocks
than this value will be split into chunks of `<blocksperchunk>` blocks each, transferred
in parallel, and reassembled on the destination. By default, `<blocksperchunk>` is 0
and files are transmitted in their entirety without splitting. This switch is applicable
only when the source file system supports getBlockLocations and the target file system
supports concat.
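As a usage illustration (the -blocksperchunk switch is from this release note; the chunk count
and the paths are hypothetical), a run might look like:

    hadoop distcp -blocksperchunk 100 hdfs://nn1/user/data/largefile hdfs://nn2/user/data/

Files with more than 100 blocks would then be copied as 100-block chunks by separate map tasks
and concatenated back into a single file on the destination.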


  was:
If a positive value is passed to the command-line switch -blocksperchunk, files with more blocks
than this value will be split into chunks of `<blocksperchunk>` blocks each, transferred
in parallel, and reassembled on the destination. By default, `<blocksperchunk>` is 0
and files are transmitted in their entirety without splitting. This switch is applicable
only when both the source and target are DistributedFileSystem.



> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, HADOOP-11794.003.patch,
> HADOOP-11794.004.patch, HADOOP-11794.005.patch, HADOOP-11794.006.patch, HADOOP-11794.007.patch,
> HADOOP-11794.008.patch, MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are greater
> than 1 TB with a block size of 1 GB. If we use distcp to copy these files, the tasks either
> take a very long time or eventually fail. A better approach for distcp would be to copy all
> the source blocks in parallel, and then stitch the blocks back into files at the destination
> via the HDFS concat API (HDFS-222).
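A minimal sketch of the reassembly step described above, assuming the chunks have already been
copied to the destination as separate files; the path names and chunk count are hypothetical,
and the actual distcp implementation derives them from its copy listing. The target file system
must support concat (e.g. DistributedFileSystem).

    // Sketch: reassemble copied chunks into one file via FileSystem#concat.
    // Paths and chunk count are hypothetical, for illustration only.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ConcatChunks {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // First chunk, already renamed to the final file name on the destination.
        Path target = new Path("hdfs://dst-cluster/data/largefile");
        FileSystem fs = target.getFileSystem(conf);

        // Remaining chunks, copied in parallel by separate tasks.
        Path[] chunks = new Path[] {
            new Path("hdfs://dst-cluster/data/largefile.chunk1"),
            new Path("hdfs://dst-cluster/data/largefile.chunk2"),
            new Path("hdfs://dst-cluster/data/largefile.chunk3")
        };

        // Appends the chunks' blocks onto the target file without copying data again.
        fs.concat(target, chunks);
      }
    }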




