hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.
Date Mon, 16 Nov 2009 19:50:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12778482#action_12778482 ]

Hadoop QA commented on HDFS-630:

-1 overall.  Here are the results of testing the latest attachment 
  against trunk revision 880630.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 7 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    -1 javac.  The applied patch generated 21 javac compiler warnings (more than the trunk's
current 20 warnings).

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    -1 core tests.  The patch failed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/114/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/114/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/114/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/114/console

This message is automatically generated.

> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes
when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>                 Key: HDFS-630
>                 URL: https://issues.apache.org/jira/browse/HDFS-630
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client
>    Affects Versions: 0.21.0
>            Reporter: Ruyue Ma
>            Assignee: Ruyue Ma
>            Priority: Minor
>         Attachments: 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 0001-Fix-HDFS-630-for-0.21.patch,
0001-Fix-HDFS-630-svn.patch, HDFS-630.patch
> Created from HDFS-200.
> If, during a write, the DFSClient finds that a replica location for a newly allocated
block is not connectable, it re-requests a fresh set of replica locations for that block from
the NN. It retries this dfs.client.block.write.retries times (default 3), sleeping 6 seconds
between retries (see DFSClient.nextBlockOutputStream; a simplified sketch of this loop follows
the quoted description).
> This setting works well on a reasonably sized cluster; with only a few datanodes in the
cluster, every retry may pick the same dead datanode, and the above logic bails out.
> Our solution: when requesting block locations from the namenode, the client also passes
the NN the datanodes to exclude. The list of dead datanodes applies to a single block
allocation only (see the second sketch below).
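
The retry loop described in the first paragraph of the report can be summarized with the
sketch below. This is not the real DFSClient code: Namenode, locateNewBlock() and
tryConnect() are illustrative stand-ins for ClientProtocol and the pipeline setup; only the
retry count (dfs.client.block.write.retries, default 3) and the 6-second sleep come from the
description above.

    import java.io.IOException;

    // Minimal sketch of the current retry behavior (not the actual HDFS source).
    public class RetryLoopSketch {

      static final int BLOCK_WRITE_RETRIES = 3;   // dfs.client.block.write.retries default
      static final long RETRY_SLEEP_MS = 6000L;   // sleep between retries, per the report

      // Hypothetical stand-in for the namenode RPC interface.
      interface Namenode {
        String[] locateNewBlock(String src);      // replica locations for the next block
      }

      static String[] nextBlockOutputStream(Namenode nn, String src)
          throws IOException, InterruptedException {
        int retries = BLOCK_WRITE_RETRIES;
        while (true) {
          String[] targets = nn.locateNewBlock(src);
          if (tryConnect(targets[0])) {
            return targets;                       // pipeline established, keep these targets
          }
          if (--retries < 0) {
            throw new IOException("Unable to create new block.");
          }
          // On a small cluster the namenode can keep returning the same dead
          // datanode, so every attempt fails the same way until retries run out.
          Thread.sleep(RETRY_SLEEP_MS);
        }
      }

      static boolean tryConnect(String datanode) {
        return false;                             // placeholder for opening the write pipeline
      }
    }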
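
The fix proposed in the report keeps an exclude list for the current block and hands it to
the namenode on every repeated request, so a dead datanode is not returned twice. The
exclusion-aware locateNewBlock() signature below is an assumption for illustration, not the
exact method signature added by the HDFS-630 patch.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the proposed behavior: unreachable datanodes are reported back
    // to the namenode so they are skipped for the rest of this block allocation.
    public class ExcludeNodesSketch {

      // Hypothetical namenode interface that accepts an exclusion list.
      interface Namenode {
        String[] locateNewBlock(String src, List<String> excludedNodes);
      }

      static String[] nextBlockOutputStream(Namenode nn, String src)
          throws IOException, InterruptedException {
        // The exclude list lives only for this single block allocation.
        List<String> excluded = new ArrayList<String>();
        for (int attempt = 0; attempt <= 3; attempt++) {
          String[] targets = nn.locateNewBlock(src, excluded);
          if (tryConnect(targets[0])) {
            return targets;
          }
          excluded.add(targets[0]);               // ask the NN not to return this node again
          Thread.sleep(6000L);
        }
        throw new IOException("Unable to create new block.");
      }

      static boolean tryConnect(String datanode) {
        return false;                             // placeholder for opening the write pipeline
      }
    }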

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
