hadoop-hdfs-issues mailing list archives

From "Tsz Wo Nicholas Sze (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
Date Mon, 30 Oct 2017 19:10:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16225571#comment-16225571 ]

Tsz Wo Nicholas Sze commented on HDFS-12594:

> Implementing RemoteIterator class so that rpc calls are made on demand while consuming
the diff may not be possible here. ...

This applies to the distcp use case, but not to all use cases.

- In SnapshotDiff.run(), it prints out the entire report by calling SnapshotDiffReport.toString(),
which iterates diffList.  Obviously, we could instead print each entry while iterating the report.

- In DistCpSync.getAllDiffs(), it only iterates report.getDiffList() and does not require the
entire report at once.

If getSnapshotDiffReport(..) does not use RemoteIterator and requires the entire report in
memory, it won't work for any tool/application with large reports.  If it uses RemoteIterator,
at least SnapshotDiff will work.  (You are probably right that DistCpSync still needs the
entire report in memory.)
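As a sketch of the on-demand pattern being argued for here (all names below are hypothetical, not the actual HDFS API): a RemoteIterator-style wrapper could fetch diff entries from the NameNode in bounded batches, so the client never needs the whole report in one RPC response.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Illustrative sketch only: an iterator that pulls diff entries in
// fixed-size batches instead of one giant response.  The in-memory list
// stands in for the NameNode-side diff list; fetchBatch() stands in for
// one bounded RPC call.  None of these names are Hadoop APIs.
public class BatchedDiffIterator implements Iterator<String> {
    private final List<String> serverSideEntries; // simulated NameNode state
    private final int batchSize;
    private final Deque<String> currentBatch = new ArrayDeque<>();
    private int nextOffset = 0;

    public BatchedDiffIterator(List<String> serverSideEntries, int batchSize) {
        this.serverSideEntries = serverSideEntries;
        this.batchSize = batchSize;
    }

    // Simulates one RPC: returns at most batchSize entries per call,
    // so each response stays under the size limit.
    private void fetchBatch() {
        int end = Math.min(nextOffset + batchSize, serverSideEntries.size());
        currentBatch.addAll(serverSideEntries.subList(nextOffset, end));
        nextOffset = end;
    }

    @Override
    public boolean hasNext() {
        if (currentBatch.isEmpty() && nextOffset < serverSideEntries.size()) {
            fetchBatch();
        }
        return !currentBatch.isEmpty();
    }

    @Override
    public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return currentBatch.poll();
    }
}
```

With something like this, SnapshotDiff.run() could print each entry as it is consumed, rather than materializing SnapshotDiffReport.toString() for the whole report.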

> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit
> -------------------------------------------------------------------------------------------
>                 Key: HDFS-12594
>                 URL: https://issues.apache.org/jira/browse/HDFS-12594
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>         Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, HDFS-12594.003.patch,
HDFS-12594.004.patch, SnapshotDiff_Improvemnets .pdf
> The snapshotDiff command fails if the snapshotDiff report size is larger than the configured
value of ipc.maximum.response.length, which defaults to 128 MB.
> In the worst case, with all rename ops in snapshots, each with source and target names equal
to MAX_PATH_LEN (8k characters), this would result in only about 8192 renames.
> SnapshotDiff is currently used by distcp to optimize copy operations, and when the diff
report exceeds the limit, it fails with the exception below:
> Test set: org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> -------------------------------------------------------------------------------
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec <<<
FAILURE! - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
 Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC
response exceeds maximum data length; Host Details : local host is: "hw15685.local/";
destination host is: "localhost":59808;
> Attached is the proposal for the changes required.

This message was sent by Atlassian JIRA
