hadoop-common-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel
Date Thu, 23 Mar 2017 06:49:42 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937830#comment-15937830 ]

Yongjun Zhang commented on HADOOP-11794:
----------------------------------------

Thanks [~chris.douglas].

I uploaded rev9 as outlined. Would all reviewers please take a look? Thanks a lot!

I would appreciate it if you could help test ADLS with this version, [~omkarksa]!

With this patch:
- removed the DistributedFileSystem check
- to enable the feature, the source FS needs to implement getBlockLocations and the target FS needs to implement concat
- distcp now checks concat support at the beginning of the run, and throws an exception when -blocksperchunk is passed but concat is not supported (see the sketch below)
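
For illustration, here is a minimal sketch, not the patch's actual code, of how such a fail-fast check could look: the base FileSystem.concat throws UnsupportedOperationException unless a subclass overrides it, so probing it tells us whether the target supports concat. The class name and probe paths below are hypothetical.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatSupportCheck {
  /**
   * Fail fast if the target FileSystem does not support concat.
   * The probe paths are hypothetical; they exist only to trigger the
   * UnsupportedOperationException thrown by the base FileSystem.concat.
   */
  public static void requireConcat(Path target, Configuration conf)
      throws IOException {
    FileSystem fs = target.getFileSystem(conf);
    try {
      fs.concat(new Path("/tmp/.probe-trg"),
                new Path[] { new Path("/tmp/.probe-src") });
    } catch (UnsupportedOperationException e) {
      throw new IOException(
          "-blocksperchunk requires concat support on the target FileSystem", e);
    } catch (IOException ignored) {
      // concat is implemented; the probe paths simply don't exist.
    }
  }
}
{code}

In the patch, such a check would presumably run once during distcp setup, before any map tasks launch.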

{quote}
Doesn't this imply that FileSystems that support concat can also be left in an inconsistent state? If the concat operation fails, or the job is killed/fails/dies, etc., then distcp cleanup should remove partial work. If a FileSystem doesn't support concat, shouldn't that failure follow the same path?
{quote}
Yes, indeed. However, if we run the same job again and it succeeds, it will clean up the temporary chunk files. We could clean up the chunk files when concat fails if we really wanted to. But when concat is supported and still fails, we need to know why; keeping the chunk files helps with debugging, and if the files turn out to be good, we still have the option of concatenating them manually. If distcp fails in the middle for some other reason, the source and target will differ anyway.

Thanks.


> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, HADOOP-11794.009.patch, MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are greater than 1 TB with a block size of 1 GB. If we use distcp to copy these files, the tasks either take a very long time or eventually fail. A better approach for distcp would be to copy all the source blocks in parallel, and then stitch the blocks back into files at the destination via the HDFS concat API (HDFS-222).
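
To make the idea in the description concrete, here is a minimal sketch, not distcp's actual implementation: it copies one large file as fixed-size chunks on parallel threads, then stitches the chunks back together with FileSystem.concat (the API from HDFS-222). The class name, thread count, and buffer size are illustrative, and the sketch glosses over HDFS's preconditions on concat (e.g., block-size alignment of the intermediate files).

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChunkedCopy {
  /** Copy src to dst in chunkSize pieces, then concat them together. */
  public static void copy(FileSystem srcFs, Path src,
                          FileSystem dstFs, Path dst,
                          long chunkSize) throws Exception {
    long len = srcFs.getFileStatus(src).getLen();
    int nChunks = (int) ((len + chunkSize - 1) / chunkSize);
    Path[] chunks = new Path[nChunks];
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < nChunks; i++) {
      final long offset = i * chunkSize;
      final long count = Math.min(chunkSize, len - offset);
      // Chunk files live next to dst so concat can merge them in place.
      final Path chunk = new Path(dst.getParent(), dst.getName() + ".chunk" + i);
      chunks[i] = chunk;
      futures.add(pool.submit(() -> {
        try (FSDataInputStream in = srcFs.open(src);
             FSDataOutputStream out = dstFs.create(chunk, true)) {
          in.seek(offset);
          byte[] buf = new byte[8192];
          long remaining = count;
          while (remaining > 0) {
            int read = in.read(buf, 0, (int) Math.min(buf.length, remaining));
            if (read < 0) break;
            out.write(buf, 0, read);
            remaining -= read;
          }
        }
        return null;
      }));
    }
    for (Future<?> f : futures) {
      f.get(); // propagate any copy failure
    }
    pool.shutdown();
    // Promote the first chunk to the final name, then append the rest.
    dstFs.rename(chunks[0], dst);
    if (nChunks > 1) {
      Path[] rest = new Path[nChunks - 1];
      System.arraycopy(chunks, 1, rest, 0, nChunks - 1);
      dstFs.concat(dst, rest);
    }
  }
}
{code}

The chunk files here double as the "temporary chunk files" discussed above: if concat fails, they remain on the target for debugging or manual concatenation.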



